diff --git a/.gitignore b/.gitignore
index 5247bcce..65bcdef8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,12 +15,21 @@ data/eval
output_pred/
wandb/
src/dbgpt-hub-sql/dbgpt_hub_sql/data/*
+src/dbgpt-hub-gql/dbgpt_hub_gql/data/*
+src/dbgpt-hub-sql/codellama/*
+src/dbgpt-hub-gql/codellama/*
+src/dbgpt-hub-sql/wandb/*
+src/dbgpt-hub-gql/wandb/*
# But track the data/eval_data folder itself
!src/dbgpt-hub-sql/dbgpt_hub_sql/data/eval_data/
!src/dbgpt-hub-sql/dbgpt_hub_sql/data/dataset_info.json
!src/dbgpt-hub-sql/dbgpt_hub_sql/data/example_text2sql.json
+!src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example
+!src/dbgpt-hub-gql/dbgpt_hub_gql/data/dataset_info.json
+!src/dbgpt-hub-gql/dbgpt_hub_gql/data/example_text2sql.json
# Ignore everything under dbgpt_hub_sql/ouput/ except the adapter directory
+src/dbgpt-hub-sql/dbgpt_hub_sql/output/
src/dbgpt-hub-sql/dbgpt_hub_sql/output/adapter/*
!src/dbgpt-hub-sql/dbgpt_hub_sql/output/adapter/.gitkeep
src/dbgpt-hub-sql/dbgpt_hub_sql/output/logs/*
@@ -28,6 +37,14 @@ src/dbgpt-hub-sql/dbgpt_hub_sql/output/logs/*
src/dbgpt-hub-sql/dbgpt_hub_sql/output/pred/*
!src/dbgpt-hub-sql/dbgpt_hub_sql/output/pred/.gitkeep
+src/dbgpt-hub-gql/dbgpt_hub_gql/output/
+src/dbgpt-hub-gql/dbgpt_hub_gql/output/adapter/*
+!src/dbgpt-hub-gql/dbgpt_hub_gql/output/adapter/.gitkeep
+src/dbgpt-hub-gql/dbgpt_hub_gql/output/logs/*
+!src/dbgpt-hub-gql/dbgpt_hub_gql/output/logs/.gitkeep
+src/dbgpt-hub-gql/dbgpt_hub_gql/output/pred/*
+!src/dbgpt-hub-gql/dbgpt_hub_gql/output/pred/.gitkeep
+
# Ignore NLU output
src/dbgpt-hub-nlu/output
src/dbgpt-hub-nlu/data
diff --git a/README.md b/README.md
index eb83705b..51e273f0 100644
--- a/README.md
+++ b/README.md
@@ -31,6 +31,7 @@
## 🔥🔥🔥 News
- Support [Text2NLU](src/dbgpt-hub-nlu/README.zh.md) fine-tuning to improve semantic understanding accuracy.
+- Support [Text2GQL](src/dbgpt-hub-gql/README.zh.md) fine-tuning to generate graph queries from natural language.
## Baseline
diff --git a/README.zh.md b/README.zh.md
index f5782905..b12f0525 100644
--- a/README.zh.md
+++ b/README.zh.md
@@ -29,6 +29,7 @@
## 🔥🔥🔥 News
- 支持 [Text2NLU](src/dbgpt-hub-nlu/README.zh.md) 微调, 提升意图识别准确率。
+- Support [Text2GQL](src/dbgpt-hub-gql/README.zh.md) fine-tuning to generate graph queries from natural language.
## Baseline
- 更新日期: 2023/12/08
diff --git a/src/dbgpt-hub-gql/README.zh.md b/src/dbgpt-hub-gql/README.zh.md
new file mode 100644
index 00000000..c10cc04b
--- /dev/null
+++ b/src/dbgpt-hub-gql/README.zh.md
@@ -0,0 +1,236 @@
+# DB-GPT-GQL: Text-to-GQL with LLMs
+
+## Baseline
+
+- Last updated: 2024/08/26
+
+
+
+## Contents
+- [DB-GPT-GQL](#db-gpt-gql-text-to-gql-with-llms)
+  - [Baseline](#baseline)
+  - [Contents](#contents)
+  - [1. Introduction](#1-introduction)
+  - [2. Text-to-GQL Fine-tuning](#2-text-to-gql-fine-tuning)
+    - [2.1 Datasets](#21-datasets)
+    - [2.2 Base Models](#22-base-models)
+  - [3. Usage](#3-usage)
+    - [3.1 Environment Setup](#31-environment-setup)
+    - [3.2 Model Preparation](#32-model-preparation)
+    - [3.3 Model Fine-tuning](#33-model-fine-tuning)
+    - [3.4 Model Prediction](#34-model-prediction)
+    - [3.5 Model Evaluation](#35-model-evaluation)
+      - [3.5.1 Text Similarity Evaluation](#351-text-similarity-evaluation)
+      - [3.5.2 Grammar Correctness Evaluation](#352-grammar-correctness-evaluation)
+    - [3.6 Model Weight Merging](#36-model-weight-merging)
+
+## 1. Introduction
+
+DB-GPT-GQL is a module that uses LLMs to perform Text-to-GQL translation for graph query languages. It covers model fine-tuning, Text-to-GQL prediction, and evaluation of the prediction results. Text-to-SQL translation for relational databases has matured to the point of having abundant datasets and multi-dimensional evaluation pipelines. Text-to-GQL is different: graph query languages still lack a unified standard, and ISO GQL, which aims to become the international standard, has not yet landed in practice. Building complete corpora for the various graph query languages and establishing evaluation mechanisms for Text-to-GQL translation results therefore remain two challenging tasks.
+
+Besides fine-tuning and prediction pipelines built on multiple LLMs, DB-GPT-GQL provides two ways to evaluate translation results. The first is text-similarity evaluation, which measures how close a prediction is to the gold answer and applies to any graph query language. The second is grammar-correctness evaluation, which parses predictions with a grammar parser for the specific graph query language; the query languages of the tugraph-db and tugraph-analytics databases are currently supported.
+
+In the future, DB-GPT-GQL will add execution-plan correctness evaluation (which requires no actual data) and, going further, execution correctness evaluation (which requires actual data), and will grade the complexity of the queries in the datasets following the approach of the spider dataset from the Text-to-SQL field.
+
+## 2. Text-to-GQL Fine-tuning
+
+We use supervised fine-tuning (SFT) of large language models to improve Text-to-GQL performance.
+
+### 2.1 Datasets
+
+The sample dataset for this project is `Cypher(tugraph-db)`: 185 corpus entries provided by tugraph-db, all executable on tugraph-db, stored in the `/dbgpt_hub_gql/data/tugraph-db-example` folder. The datasets currently available are:
+
+- [Cypher(tugraph-db)](https://tugraph-web.oss-cn-beijing.aliyuncs.com/tugraph/datasets/text2gql/tugraph-db/tugraph-db.zip): a dataset conforming to tugraph-db's Cypher syntax, built with the [syntax-directed corpus generation strategy](https://mp.weixin.qq.com/s/rZdj8TEoHZg_f4C-V4lq2A): query-language templates are combined with diversified schemas to generate queries, and an LLM then generalizes the corresponding natural-language question descriptions. The [corpus generation framework](https://github.com/TuGraph-contrib/Awesome-Text2GQL) is now open source, and contributions are welcome.
+
+- [GQL(tugraph-analytics)](https://tugraph-web.oss-cn-beijing.aliyuncs.com/tugraph/datasets/text2gql/tugraph-analytics/tugraph-analytics.zip): a dataset conforming to tugraph-analytics' GQL syntax, built with the same [syntax-directed corpus generation strategy](https://mp.weixin.qq.com/s/rZdj8TEoHZg_f4C-V4lq2A): query-language templates are combined with diversified schemas to generate queries, and an LLM then generalizes the corresponding natural-language question descriptions.
+
+Download and unzip the datasets, then place them under the `/dbgpt_hub_gql/data/` directory.
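+
+Each record pairs a schema-bearing instruction and a natural-language question with the gold query (see the `dev.json` file added later in this PR). A quick sanity check, assuming `train.json` shares the same fields as `dev.json`:
+
+```python
+import json
+
+with open("dbgpt_hub_gql/data/tugraph-db-example/train.json") as f:
+    records = json.load(f)
+
+sample = records[0]
+print(sample["instruction"])  # task instruction plus graph schema description
+print(sample["input"])        # natural-language question
+print(sample["output"])       # gold Cypher/GQL query
+```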
+
+### 2.2 Base Models
+
+The base models DB-GPT-GQL currently supports:
+
+ - [x] CodeLlama
+ - [x] Baichuan2
+ - [x] LLaMa/LLaMa2
+ - [x] Falcon
+ - [x] Qwen
+ - [x] XVERSE
+ - [x] ChatGLM2
+ - [x] ChatGLM3
+ - [x] internlm
+ - [x] sqlcoder-7b(mistral)
+ - [x] sqlcoder2-15b(starcoder)
+
+
+
+The minimum hardware required for quantized fine-tuning (QLoRA) with quantization_bit set to 4 is roughly as follows:
+
+| Model size | GPU RAM | CPU RAM | DISK   |
+| ---------- | ------- | ------- | ------ |
+| 7b         | 6GB     | 3.6GB   | 36.4GB |
+| 13b        | 13.4GB  | 5.9GB   | 60.2GB |
+
+These numbers assume all related parameters are set to their minimums, with a batch_size of 1 and a max_length of 512. Empirically, if compute resources allow, setting the length values to 1024 or 2048 is recommended for better results.
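+
+For reference, a minimal sketch of the 4-bit setup those numbers correspond to, expressed with Hugging Face `transformers`/`bitsandbytes` (an illustration only; the training scripts wire this up through their own arguments, whose defaults it mirrors):
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+bnb_config = BitsAndBytesConfig(
+    load_in_4bit=True,               # quantization_bit = 4
+    bnb_4bit_quant_type="nf4",       # default quantization_type
+    bnb_4bit_use_double_quant=True,  # default double_quantization
+    bnb_4bit_compute_dtype=torch.float16,
+)
+model = AutoModelForCausalLM.from_pretrained(
+    "codellama/CodeLlama-7b-Instruct-hf",
+    quantization_config=bnb_config,
+    device_map="auto",
+)
+```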
+
+## 3. Usage
+This chapter walks through the full DB-GPT-GQL workflow, using the 185 corpus entries shipped in the repository's `/dbgpt_hub_gql/data/tugraph-db-example` folder together with `CodeLlama-7B-Instruct` (downloaded separately) as the running example.
+
+### 3.1 Environment Setup
+
+Clone the project and create a conda environment:
+```bash
+git clone https://github.com/eosphoros-ai/DB-GPT-Hub.git
+cd DB-GPT-Hub
+conda create -n dbgpt_hub_gql python=3.10
+conda activate dbgpt_hub_gql
+```
+
+Enter the DB-GPT-GQL project directory and install the dependencies:
+```bash
+cd src/dbgpt-hub-gql
+pip install -e .
+```
+
+### 3.2 Model Preparation
+Create and enter the directory for storing the codellama model:
+```bash
+mkdir codellama
+cd ./codellama
+```
+
+Create a `download.py` file in the `codellama` folder and copy the following content into it:
+```python
+from modelscope import snapshot_download
+
+model_dir = snapshot_download("AI-ModelScope/CodeLlama-7b-Instruct-hf")
+```
+
+Install the Python dependency and download the model:
+```bash
+pip install modelscope
+python3 download.py
+```
+
+After the download completes, symlink the model files into the `codellama` directory:
+```bash
+ln -s /root/.cache/modelscope/hub/AI-ModelScope/CodeLlama-7b-Instruct-hf ./
+```
+
+### 3.3 Model Fine-tuning
+Before starting fine-tuning, the training dataset must be registered manually in `./dbgpt_hub_gql/data/dataset_info.json`. The training dataset in the `./dbgpt_hub_gql/data/tugraph-db-example` folder is already registered there, in the following format:
+
+```json
+"tugraph_db_example_train": {
+ "file_name": "./tugraph-db-example/train.json",
+ "columns": {
+ "prompt": "instruction",
+ "query": "input",
+ "response": "output",
+ "history": "history"
+ }
+ }
+```
+
+In `dbgpt_hub_gql/scripts/train_sft.sh`, set the dataset, the model, and the output path for the fine-tuned weights. The current defaults target the training dataset in `./dbgpt_hub_gql/data/tugraph-db-example` and the `CodeLlama-7B-Instruct` model, fine-tuned with the LoRA method:
+
+```shell script
+dataset="tugraph_db_example_train"
+model_name_or_path=${model_name_or_path-"codellama/CodeLlama-7b-Instruct-hf"}
+output_dir="dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora"
+```
+
+Run the fine-tuning script to start fine-tuning:
+```bash
+sh dbgpt_hub_gql/scripts/train_sft.sh
+```
+
+### 3.4 Model Prediction
+
+In `./dbgpt_hub_gql/scripts/predict_sft.sh`, set the dataset to predict on, the model, the path of the fine-tuned weights, and the output path for the predictions. The current defaults target the dev set in `./dbgpt_hub_gql/data/tugraph-db-example` and the LoRA fine-tuned `CodeLlama-7B-Instruct` model:
+
+```shell script
+CUDA_VISIBLE_DEVICES=0,1 python dbgpt_hub_gql/predict/predict.py \
+ --model_name_or_path codellama/CodeLlama-7b-Instruct-hf \
+ --template llama2 \
+ --finetuning_type lora \
+ --predicted_input_filename dbgpt_hub_gql/data/tugraph-db-example/dev.json \
+ --checkpoint_dir dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora \
+ --predicted_out_filename dbgpt_hub_gql/output/pred/tugraph_db_example_dev.txt >> ${pred_log}
+```
+
+Run the prediction script to obtain the predictions:
+
+```bash
+sh dbgpt_hub_gql/scripts/predict_sft.sh
+```
+
+### 3.5 Model Evaluation
+
+The current version supports two methods for evaluating prediction results: text-similarity evaluation based on the Jaro–Winkler distance, and grammar-correctness evaluation based on a `.g4` grammar file or the graph database's existing grammar parser.
+
+#### 3.5.1 Text Similarity Evaluation
+
+Text-similarity evaluation directly measures the Jaro–Winkler distance between each prediction and the gold-standard query:
+
+```bash
+python dbgpt_hub_gql/eval/evaluation.py --input ./dbgpt_hub_gql/output/pred/tugraph_db_example_dev.txt --gold ./dbgpt_hub_gql/data/tugraph-db-example/gold_dev.txt --etype similarity
+```
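+
+For intuition, here is a minimal, self-contained sketch of the Jaro–Winkler similarity that the `--etype similarity` mode is based on (an illustration only, not the evaluator's actual code):
+
+```python
+def jaro(s1: str, s2: str) -> float:
+    """Jaro similarity in [0, 1]; 1.0 means identical strings."""
+    if s1 == s2:
+        return 1.0
+    if not s1 or not s2:
+        return 0.0
+    window = max(0, max(len(s1), len(s2)) // 2 - 1)
+    used1, used2 = [False] * len(s1), [False] * len(s2)
+    matches = 0
+    for i, c in enumerate(s1):  # characters match if equal and within the window
+        for j in range(max(0, i - window), min(len(s2), i + window + 1)):
+            if not used2[j] and s2[j] == c:
+                used1[i] = used2[j] = True
+                matches += 1
+                break
+    if matches == 0:
+        return 0.0
+    k = transpositions = 0
+    for i in range(len(s1)):  # matched characters out of order count as transpositions
+        if used1[i]:
+            while not used2[k]:
+                k += 1
+            if s1[i] != s2[k]:
+                transpositions += 1
+            k += 1
+    t = transpositions / 2
+    return (matches / len(s1) + matches / len(s2) + (matches - t) / matches) / 3
+
+
+def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
+    """Boost the Jaro score for strings sharing a common prefix (capped at 4 chars)."""
+    j = jaro(s1, s2)
+    prefix = 0
+    for a, b in zip(s1[:4], s2[:4]):
+        if a != b:
+            break
+        prefix += 1
+    return j + prefix * p * (1 - j)
+
+
+print(jaro_winkler("MATCH (n) RETURN n", "MATCH (m) RETURN m"))  # ~0.956
+```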
+
+#### 3.5.2 Grammar Correctness Evaluation
+
+`tugraph-db-example` is a corpus dataset that follows tugraph-db's LCypher grammar rules. Grammar-correctness evaluation uses a parser generated with the ANTLR4 tool from the `./dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.g4` file to check whether the model's predictions are grammatically correct:
+
+```bash
+python dbgpt_hub_gql/eval/evaluation.py --input ./dbgpt_hub_gql/output/pred/tugraph_db_example_dev.txt --gold ./dbgpt_hub_gql/data/tugraph-db-example/gold_dev.txt --etype grammar --impl tugraph-db
+```
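+
+As a sketch of what such a check looks like, the snippet below parses a prediction with an ANTLR4-generated parser. `LcypherLexer`/`LcypherParser` follow ANTLR4's naming for a grammar called `Lcypher`; the entry rule name `oC_Cypher` is an assumption (openCypher-derived grammars typically use it) and may differ in the actual generated parser:
+
+```python
+from antlr4 import CommonTokenStream, InputStream
+
+from LcypherLexer import LcypherLexer    # generated by: antlr4 -Dlanguage=Python3 Lcypher.g4
+from LcypherParser import LcypherParser
+
+
+def is_grammatical(query: str) -> bool:
+    """Return True when the query parses without syntax errors."""
+    parser = LcypherParser(CommonTokenStream(LcypherLexer(InputStream(query))))
+    parser.removeErrorListeners()  # keep the console quiet; we only count errors
+    parser.oC_Cypher()             # assumed entry rule of the grammar
+    return parser.getNumberOfSyntaxErrors() == 0
+```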
+
+### 3.6 Model Weight Merging
+
+If you need to merge the weights of the trained base model with the fine-tuned PEFT module and export a complete model, run the following export script:
+
+```bash
+sh dbgpt_hub_gql/scripts/export_merge.sh
+```
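+
+Conceptually, the merge step folds the LoRA deltas back into the base weights. A minimal sketch with the PEFT library (assuming, as is typical, that the script performs a PEFT adapter merge; paths reuse the defaults above):
+
+```python
+from peft import PeftModel
+from transformers import AutoModelForCausalLM
+
+base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
+model = PeftModel.from_pretrained(
+    base, "dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora"
+)
+merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
+merged.save_pretrained("dbgpt_hub_gql/output/merged_models")
+```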
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/__init__.py
new file mode 100644
index 00000000..34e8a997
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/__init__.py
@@ -0,0 +1,9 @@
+from .data_args import DataArguments, Llama2Template
+from .model_args import ModelArguments, TrainingArguments
+
+__all__ = [
+ "DataArguments",
+ "Llama2Template",
+ "ModelArguments",
+ "TrainingArguments",
+]
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/config.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/config.py
new file mode 100644
index 00000000..a59315ed
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/config.py
@@ -0,0 +1,53 @@
+import os
+
+### path config
+ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+# ROOT_PATH = "/root/autodl-tmp"
+# MODELS_PARENT_PATH = "/home/model_files/codellama/"
+# DEFAULT_FT_MODEL_NAME = "CodeLlama-7b-Instruct-hf"
+MODELS_PARENT_PATH = "/home/model/"
+DEFAULT_FT_MODEL_NAME = "Baichuan2-13B-Chat"
+MODEL_PATH = os.path.join(MODELS_PARENT_PATH, DEFAULT_FT_MODEL_NAME)
+
+# MODEL_PATH = os.path.join(ROOT_PATH, "model")
+ADAPTER_PATH = os.path.join(ROOT_PATH, "dbgpt_hub_gql/output/adapter")
+MERGED_MODELS = os.path.join(ROOT_PATH, "dbgpt_hub_gql/output/merged_models")
+
+# DATA_PATH = "/root/autodl-tmp/data/spider/pre_processed_data"
+# OUT_DIR= "/root/autodl-tmp/codellama"
+
+DATA_PATH = os.path.join(ROOT_PATH, "dbgpt_hub_gql/data")
+PREDICTED_DATA_PATH = os.path.join(
+ ROOT_PATH, "dbgpt_hub_gql/data/tugraph-db-example/dev.json"
+)
+PREDICTED_OUT_FILENAME = "pred_gql.txt"
+# OUT_DIR = os.path.join(DATA_PATH, "out_pred")
+OUT_DIR = os.path.join(ROOT_PATH, "dbgpt_hub_gql/output/")
+
+## model constants
+IGNORE_INDEX = -100
+DEFAULT_PAD_TOKEN = "[PAD]"
+DEFAULT_EOS_TOKEN = "</s>"
+DEFAULT_BOS_TOKEN = "<s>"
+DEFAULT_UNK_TOKEN = "<unk>"
+
+
+LOG_FILE_NAME = "trainer_log.jsonl"
+
+# file name used when saving the value head state_dict
+VALUE_HEAD_FILE_NAME = "value_head.bin"
+
+# file name used when saving finetuning_args via save_to_json
+FINETUNING_ARGS_NAME = "finetuning_args.json"
+
+# layer-norm module names handled by prepare_model_for_training
+LAYERNORM_NAMES = ["norm", "ln_f", "ln_attn", "ln_mlp"]
+EXT2TYPE = {"csv": "csv", "json": "json", "jsonl": "json", "txt": "text"}
+
+INSTRUCTION_PROMPT = """\
+I want you to act as a GQL terminal in front of an example database, \
+you need only to return the gql command to me.Below is an instruction that describes a task, \
+Write a response that appropriately completes the request.\n"
+##Instruction:\n{}\n"""
+INPUT_PROMPT = "###Input:\n{}\n\n###Response:"
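+
+# Usage note (editor's sketch; how these templates are combined is assumed from
+# the analogous dbgpt_hub_sql module and is not shown in this diff):
+#   prompt = INSTRUCTION_PROMPT.format(schema_text) + INPUT_PROMPT.format(question)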
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/data_args.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/data_args.py
new file mode 100644
index 00000000..d54968f9
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/data_args.py
@@ -0,0 +1,406 @@
+import json
+import os
+from dataclasses import dataclass, field
+from typing import TYPE_CHECKING, Dict, List, Literal, Optional, Tuple, Union
+
+import tiktoken
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedTokenizer
+
+
+DEFAULT_PROMPT_DICT = {
+ "prompt_input": ("{instruction}\n\n{input}\n\n"),
+ "prompt_no_input": ("{instruction}\n\n"),
+}
+
+
+ALPACA_PROMPT_DICT = {
+ "prompt_input": (
+ "Below is an instruction that describes a task, paired with an input that provides further context. "
+ "Write a response that appropriately completes the request.\n\n"
+ "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response: "
+ ),
+ "prompt_no_input": (
+ "Below is an instruction that describes a task. "
+ "Write a response that appropriately completes the request.\n\n"
+ "### Instruction:\n{instruction}\n\n### Response: "
+ ),
+}
+
+SQL_PROMPT_DICT = {
+ "prompt_input": (
+ "I want you to act as a GQL terminal in front of an example database, \
+ you need only to return the gql command to me.Below is an instruction that describes a task, \
+ Write a response that appropriately completes the request.\n"
+ "##Instruction:\n{instruction}\n###Input:\n{input}\n\n###Response:"
+ ),
+ "prompt_no_input": (
+ "I want you to act as a GQL terminal in front of an example database, \
+ you need only to return the gql command to me.Below is an instruction that describes a task, \
+ Write a response that appropriately completes the request.\n"
+ "####Instruction:\n{instruction}\n\###Response: "
+ ),
+}
+
+
+@dataclass
+class DatasetAttr:
+ load_from: str
+ dataset_name: Optional[str] = None
+ dataset_sha1: Optional[str] = None
+ system_prompt: Optional[str] = None
+ stage: Optional[str] = None
+
+ def __repr__(self) -> str:
+ return self.dataset_name
+
+ def __post_init__(self):
+ self.prompt = "instruction"
+ self.query = "input"
+ self.response = "output"
+ self.history = None
+
+
+@dataclass
+class DataArguments:
+ r"""
+ Arguments pertaining to what data we are going to input our model for training and evaluation.
+ """
+ template: str = field(
+ metadata={
+ "help": "Which template to use for constructing prompts in training and inference."
+ }
+ )
+ dataset: Optional[str] = field(
+ default="tugraph_db_example_train",
+ metadata={
+ "help": "The name of provided dataset(s) to use. Use commas to separate multiple datasets."
+ },
+ )
+ dataset_dir: Optional[str] = field(
+ default="dbgpt_hub_gql/data/",
+ metadata={"help": "The name of the folder containing datasets."},
+ )
+ cutoff_len: Optional[int] = field(
+ default=1024,
+ metadata={"help": "The maximum length of the model inputs after tokenization."},
+ )
+ reserved_label_len: Optional[int] = field(
+ default=1,
+ metadata={"help": "The maximum length reserved for label after tokenization."},
+ )
+ split: Optional[str] = field(
+ default="train",
+ metadata={"help": "Which dataset split to use for training and evaluation."},
+ )
+ streaming: Optional[bool] = field(
+ default=False, metadata={"help": "Enable streaming mode."}
+ )
+ buffer_size: Optional[int] = field(
+ default=16384,
+ metadata={
+ "help": "Size of the buffer to randomly sample examples from in streaming mode."
+ },
+ )
+ mix_strategy: Optional[
+ Literal["concat", "interleave_under", "interleave_over"]
+ ] = field(default="concat", metadata={"help": "Strategy to use in dataset mixing."})
+ interleave_probs: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "Probabilities to sample data from datasets. Use commas to separate multiple datasets."
+ },
+ )
+ overwrite_cache: Optional[bool] = field(
+ default=False,
+ metadata={"help": "Overwrite the cached training and evaluation sets."},
+ )
+ preprocessing_num_workers: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of processes to use for the preprocessing."},
+ )
+ max_source_length: Optional[int] = field(
+ default=512,
+ metadata={
+ "help": "The maximum total input sequence length after tokenization."
+ },
+ )
+ max_target_length: Optional[int] = field(
+ default=512,
+ metadata={
+ "help": "The maximum total output sequence length after tokenization."
+ },
+ )
+ max_samples: Optional[int] = field(
+ default=None,
+ metadata={
+ "help": "For debugging purposes, truncate the number of examples for each dataset."
+ },
+ )
+ eval_num_beams: Optional[int] = field(
+ default=None,
+ metadata={
+ "help": "Number of beams to use for evaluation. This argument will be passed to `model.generate`"
+ },
+ )
+ ignore_pad_token_for_loss: Optional[bool] = field(
+ default=True,
+ metadata={
+ "help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
+ },
+ )
+ system_prompt: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "System prompt to add before the user query. Use `|` to separate multiple prompts in training."
+ },
+ )
+ val_size: Optional[float] = field(
+ default=0,
+ metadata={
+ "help": "Size of the development set, should be an integer or a float in range `[0,1)`."
+ },
+ )
+ predicted_input_filename: Optional[str] = field(
+ default="dbgpt_hub_gql/data/tugraph-db-example/dev.json",
+ metadata={"help": "Predict input filename to do pred "},
+ )
+ predicted_out_filename: Optional[str] = field(
+ default="pred_gql.txt",
+ metadata={"help": "Filename to save predicted outcomes"},
+ )
+
+ def init_for_training(self): # support mixing multiple datasets
+ dataset_names = [ds.strip() for ds in self.dataset.split(",")]
+ with open(os.path.join(self.dataset_dir, "dataset_info.json"), "r") as f:
+ dataset_info = json.load(f)
+
+ prompt_list = self.system_prompt.split("|") if self.system_prompt else [None]
+ prompt_list = prompt_list * (len(dataset_names) // len(prompt_list))
+ assert len(prompt_list) == len(
+ dataset_names
+ ), "Number of system prompts should be equal to datasets or 1."
+
+ if self.interleave_probs is not None:
+ self.interleave_probs = [
+ float(prob.strip()) for prob in self.interleave_probs.split(",")
+ ]
+
+ self.dataset_list: List[DatasetAttr] = []
+ for i, name in enumerate(dataset_names):
+ if name not in dataset_info:
+ raise ValueError(
+ "Undefined dataset {} in dataset_info.json.".format(name)
+ )
+
+ if "hf_hub_url" in dataset_info[name]:
+ dataset_attr = DatasetAttr(
+ "hf_hub",
+ dataset_name=dataset_info[name]["hf_hub_url"],
+ stage=dataset_info[name].get("stage", None),
+ )
+ elif "script_url" in dataset_info[name]:
+ dataset_attr = DatasetAttr(
+ "script",
+ dataset_name=dataset_info[name]["script_url"],
+ stage=dataset_info[name].get("stage", None),
+ )
+ else:
+ dataset_attr = DatasetAttr(
+ "file",
+ dataset_name=dataset_info[name]["file_name"],
+ dataset_sha1=dataset_info[name].get("file_sha1", None),
+ stage=dataset_info[name].get("stage", None),
+ )
+
+ if "columns" in dataset_info[name]:
+ dataset_attr.prompt = dataset_info[name]["columns"].get("prompt", None)
+ dataset_attr.query = dataset_info[name]["columns"].get("query", None)
+ dataset_attr.response = dataset_info[name]["columns"].get(
+ "response", None
+ )
+ dataset_attr.history = dataset_info[name]["columns"].get(
+ "history", None
+ )
+
+ dataset_attr.system_prompt = prompt_list[i]
+ self.dataset_list.append(dataset_attr)
+
+
+@dataclass
+class Template:
+ prefix: List[Union[str, Dict[str, str]]]
+ prompt: List[Union[str, Dict[str, str]]]
+ system: str
+ sep: List[Union[str, Dict[str, str]]]
+ stop_words: List[str]
+ use_history: bool
+
+ def encode_oneturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ query: str,
+ resp: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ ) -> Tuple[List[int], List[int]]:
+ r"""
+ Returns a single pair of token ids representing prompt and response respectively.
+ """
+ system, history = self._format(query, resp, history, system)
+ encoded_pairs = self._encode(tokenizer, system, history)
+ prompt_ids = []
+ for query_ids, resp_ids in encoded_pairs[:-1]:
+ prompt_ids = prompt_ids + query_ids + resp_ids
+ prompt_ids, answer_ids = prompt_ids + encoded_pairs[-1][0], encoded_pairs[-1][1]
+ return prompt_ids, answer_ids
+
+ def encode_multiturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ query: str,
+ resp: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ ) -> List[Tuple[List[int], List[int]]]:
+ r"""
+ Returns multiple pairs of token ids representing prompts and responses respectively.
+ """
+ system, history = self._format(query, resp, history, system)
+ encoded_pairs = self._encode(tokenizer, system, history)
+ return encoded_pairs
+
+ def _format(
+ self,
+ query: str,
+ resp: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ ) -> Tuple[str, List[Tuple[str, str]]]:
+ r"""
+ Aligns inputs to the standard format.
+ """
+ system = system or self.system # use system if provided
+ history = history if (history and self.use_history) else []
+ history = history + [(query, resp)]
+ return system, history
+
+ def _get_special_ids(
+ self, tokenizer: "PreTrainedTokenizer"
+ ) -> Tuple[List[int], List[int]]:
+ if tokenizer.bos_token_id is not None and getattr(
+ tokenizer, "add_bos_token", True
+ ): # baichuan-13b has no bos token
+ bos_ids = [tokenizer.bos_token_id]
+ else:
+ bos_ids = [] # bos token is optional
+
+ if tokenizer.eos_token_id is not None:
+ eos_ids = [tokenizer.eos_token_id]
+ else:
+ raise ValueError("EOS token is required.")
+
+ return bos_ids, eos_ids
+
+ def _encode(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ system: str,
+ history: List[Tuple[str, str]],
+ ) -> List[Tuple[List[int], List[int]]]:
+ r"""
+ Encodes formatted inputs to pairs of token ids.
+ Turn 0: bos + prefix + sep + query resp + eos
+ Turn t: sep + bos + query resp + eos
+ """
+ bos_ids, eos_ids = self._get_special_ids(tokenizer)
+ sep_ids = self._convert_inputs_to_ids(tokenizer, context=self.sep)
+ encoded_pairs = []
+ for turn_idx, (query, resp) in enumerate(history):
+ if turn_idx == 0:
+ prefix_ids = self._convert_inputs_to_ids(
+ tokenizer, context=self.prefix, system=system
+ )
+ if len(prefix_ids) != 0: # has prefix
+ prefix_ids = bos_ids + prefix_ids + sep_ids
+ else:
+ prefix_ids = bos_ids
+ else:
+ prefix_ids = sep_ids + bos_ids
+
+ query_ids = self._convert_inputs_to_ids(
+ tokenizer, context=self.prompt, query=query, idx=str(turn_idx)
+ )
+ resp_ids = self._convert_inputs_to_ids(tokenizer, context=[resp])
+ encoded_pairs.append((prefix_ids + query_ids, resp_ids + eos_ids))
+ return encoded_pairs
+
+ def _convert_inputs_to_ids(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ context: List[Union[str, Dict[str, str]]],
+ system: Optional[str] = None,
+ query: Optional[str] = None,
+ idx: Optional[str] = None,
+ ) -> List[int]:
+ r"""
+ Converts context to token ids.
+ """
+ if isinstance(
+ getattr(tokenizer, "tokenizer", None), tiktoken.Encoding
+ ): # for tiktoken tokenizer (Qwen)
+ kwargs = dict(allowed_special="all")
+ else:
+ kwargs = dict(add_special_tokens=False)
+
+ token_ids = []
+ for elem in context:
+ if isinstance(elem, str):
+ elem = (
+ elem.replace("{{system}}", system, 1)
+ if system is not None
+ else elem
+ )
+ elem = (
+ elem.replace("{{query}}", query, 1) if query is not None else elem
+ )
+ elem = elem.replace("{{idx}}", idx, 1) if idx is not None else elem
+ token_ids = token_ids + tokenizer.encode(elem, **kwargs)
+ elif isinstance(elem, dict):
+ token_ids = token_ids + [
+ tokenizer.convert_tokens_to_ids(elem.get("token"))
+ ]
+ else:
+ raise NotImplementedError
+
+ return token_ids
+
+
+@dataclass
+class Llama2Template(Template):
+ def _encode(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ system: str,
+ history: List[Tuple[str, str]],
+ ) -> List[Tuple[List[int], List[int]]]:
+ r"""
+ Encodes formatted inputs to pairs of token ids.
+ Turn 0: bos + prefix + query resp + eos
+ Turn t: bos + query resp + eos
+ """
+ bos_ids, eos_ids = self._get_special_ids(tokenizer)
+ encoded_pairs = []
+ for turn_idx, (query, resp) in enumerate(history):
+ if turn_idx == 0: # llama2 template has no sep_ids
+ query = self.prefix[0].replace("{{system}}", system) + query
+ query_ids = self._convert_inputs_to_ids(
+ tokenizer, context=self.prompt, query=query
+ )
+ resp_ids = self._convert_inputs_to_ids(tokenizer, context=[resp])
+ encoded_pairs.append((bos_ids + query_ids, resp_ids + eos_ids))
+ return encoded_pairs
+
+
+templates: Dict[str, Template] = {}
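+
+# NOTE (editor's assumption): Template instances, such as the "llama2" template
+# that the scripts select via --template, are registered into this dict by code
+# outside this file; those registrations are not part of this diff.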
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config.json
new file mode 100644
index 00000000..e96d4d9b
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config.json
@@ -0,0 +1,23 @@
+{
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto",
+ "gradient_clipping": "auto",
+ "zero_allow_untested_optimizer": true,
+ "fp16": {
+ "enabled": "auto",
+ "loss_scale": 0,
+ "initial_scale_power": 16,
+ "loss_scale_window": 1000,
+ "hysteresis": 2,
+ "min_loss_scale": 1
+ },
+ "zero_optimization": {
+ "stage": 2,
+ "allgather_partitions": true,
+ "allgather_bucket_size": 5e8,
+ "reduce_scatter": true,
+ "reduce_bucket_size": 5e8,
+ "overlap_comm": false,
+ "contiguous_gradients": true
+ }
+ }
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config_stage3.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config_stage3.json
new file mode 100644
index 00000000..ea173478
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/ds_config_stage3.json
@@ -0,0 +1,32 @@
+{
+ "fp16": {
+ "enabled": "auto",
+ "loss_scale": 0,
+ "loss_scale_window": 1000,
+ "initial_scale_power": 16,
+ "hysteresis": 2,
+ "min_loss_scale": 1
+ },
+ "zero_optimization": {
+ "stage": 3,
+ "offload_optimizer": {
+ "device": "cpu",
+ "pin_memory": true
+ },
+ "offload_param": {
+ "device": "cpu",
+ "pin_memory": true
+ },
+ "overlap_comm": true,
+ "contiguous_gradients": true,
+ "stage3_max_live_parameters" : 1e9,
+ "stage3_max_reuse_distance" : 1e9,
+ "stage3_prefetch_bucket_size" : 5e8,
+ "stage3_param_persistence_threshold" : 1e6,
+ "sub_group_size" : 1e12,
+ "stage3_gather_16bit_weights_on_model_save": true
+ },
+ "train_batch_size": "auto",
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto"
+}
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/model_args.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/model_args.py
new file mode 100644
index 00000000..3d40a2a8
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/configs/model_args.py
@@ -0,0 +1,412 @@
+import json
+from dataclasses import asdict, dataclass, field
+from typing import Any, Dict, Literal, Optional
+
+import torch
+from transformers import Seq2SeqTrainingArguments
+
+from .config import ADAPTER_PATH
+
+
+@dataclass
+class ModelArguments:
+ r"""
+ Arguments pertaining to which model/config/tokenizer we are going to fine-tune.
+ """
+ model_name_or_path: str = field(
+ metadata={
+ "help": "Path to pretrained model or model identifier from huggingface.co/models."
+ }
+ )
+ cache_dir: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "Where to store the pretrained models downloaded from huggingface.co."
+ },
+ )
+ use_fast_tokenizer: Optional[bool] = field(
+ default=False,
+ metadata={
+ "help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."
+ },
+ )
+ use_auth_token: Optional[bool] = field(
+ default=False,
+ metadata={
+ "help": "Will use the token generated when running `huggingface-cli login`."
+ },
+ )
+ model_revision: Optional[str] = field(
+ default="main",
+ metadata={
+ "help": "The specific model version to use (can be a branch name, tag name or commit id)."
+ },
+ )
+ padding_side: Optional[Literal["left", "right"]] = field(
+ default="left",
+ metadata={"help": "The side on which the model should have padding applied."},
+ )
+ quantization_bit: Optional[int] = field(
+ default=None, metadata={"help": "The number of bits to quantize the model."}
+ )
+ quantization_type: Optional[Literal["fp4", "nf4"]] = field(
+ default="nf4",
+ metadata={"help": "Quantization data type to use in int4 training."},
+ )
+ double_quantization: Optional[bool] = field(
+ default=True,
+ metadata={
+ "help": "Whether to use double quantization in int4 training or not."
+ },
+ )
+ rope_scaling: Optional[Literal["linear", "dynamic"]] = field(
+ default=None, metadata={"help": "Adopt scaled rotary positional embeddings."}
+ )
+ checkpoint_dir: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "Path to the directory(s) containing the delta model checkpoints as well as the configurations."
+ },
+ )
+ # reward_model: Optional[str] = field(
+ # default=None,
+ # metadata={"help": "Path to the directory containing the checkpoints of the reward model."}
+ # )
+ plot_loss: Optional[bool] = field(
+ default=False,
+ metadata={
+ "help": "Whether to plot the training loss after fine-tuning or not."
+ },
+ )
+ hf_auth_token: Optional[str] = field(
+ default=None, metadata={"help": "Auth token to log in with Hugging Face Hub."}
+ )
+ compute_dtype: Optional[torch.dtype] = field(
+ default=None,
+ metadata={
+ "help": "Used in quantization configs. Do not specify this argument manually."
+ },
+ )
+ model_max_length: Optional[int] = field(
+ default=None,
+ metadata={
+ "help": "Used in rope scaling. Do not specify this argument manually."
+ },
+ )
+ hf_hub_token: Optional[str] = field(
+ default=None, metadata={"help": "Auth token to log in with Hugging Face Hub."}
+ )
+ split_special_tokens: Optional[bool] = field(
+ default=False,
+ metadata={
+ "help": "Whether or not the special tokens should be split during the tokenization process."
+ },
+ )
+
+ def __post_init__(self):
+ if self.compute_dtype is not None or self.model_max_length is not None:
+ raise ValueError("These arguments cannot be specified.")
+
+ if self.checkpoint_dir is not None: # support merging multiple lora weights
+ self.checkpoint_dir = [cd.strip() for cd in self.checkpoint_dir.split(",")]
+
+ if self.quantization_bit is not None:
+ assert self.quantization_bit in [
+ 4,
+ 8,
+ ], "We only accept 4-bit or 8-bit quantization."
+
+ if self.use_auth_token == True and self.hf_auth_token is not None:
+ from huggingface_hub.hf_api import HfFolder # lazy load
+
+ HfFolder.save_token(self.hf_auth_token)
+
+
+@dataclass
+class GeneratingArguments:
+ r"""
+ Arguments pertaining to specify the decoding parameters.
+ """
+ do_sample: Optional[bool] = field(
+ default=True,
+ metadata={
+ "help": "Whether or not to use sampling, use greedy decoding otherwise."
+ },
+ )
+ temperature: Optional[float] = field(
+ default=0.95,
+ metadata={"help": "The value used to modulate the next token probabilities."},
+ )
+ top_p: Optional[float] = field(
+ default=0.7,
+ metadata={
+ "help": "The smallest set of most probable tokens with probabilities that add up to top_p or higher are kept."
+ },
+ )
+ top_k: Optional[int] = field(
+ default=50,
+ metadata={
+ "help": "The number of highest probability vocabulary tokens to keep for top-k filtering."
+ },
+ )
+ num_beams: Optional[int] = field(
+ default=1,
+ metadata={"help": "Number of beams for beam search. 1 means no beam search."},
+ )
+ max_length: Optional[int] = field(
+ default=None,
+ metadata={
+ "help": "The maximum length the generated tokens can have. It can be overridden by max_new_tokens."
+ },
+ )
+ max_new_tokens: Optional[int] = field(
+ default=512,
+ metadata={
+ "help": "The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt."
+ },
+ )
+ repetition_penalty: Optional[float] = field(
+ default=1.0,
+ metadata={
+ "help": "The parameter for repetition penalty. 1.0 means no penalty."
+ },
+ )
+ length_penalty: Optional[float] = field(
+ default=1.0,
+ metadata={
+ "help": "Exponential penalty to the length that is used with beam-based generation."
+ },
+ )
+
+ def to_dict(self) -> Dict[str, Any]:
+ args = asdict(self)
+ if args.get("max_new_tokens", None):
+ args.pop("max_length", None)
+ return args
+
+
+@dataclass
+class FinetuningArguments:
+ r"""
+ Arguments pertaining to which techniques we are going to fine-tuning with.
+ """
+ stage: Optional[Literal["sft", "rm"]] = field(
+ default="sft", metadata={"help": "Which stage will be performed in training."}
+ )
+ finetuning_type: Optional[Literal["lora", "freeze", "full", "none"]] = field(
+ default="lora", metadata={"help": "Which fine-tuning method to use."}
+ )
+ num_hidden_layers: Optional[int] = field(
+ default=32,
+ metadata={
+ "help": 'Number of decoder blocks in the model for partial-parameter (freeze) fine-tuning. \
+ LLaMA choices: ["32", "40", "60", "80"], \
+ LLaMA-2 choices: ["32", "40", "80"], \
+ BLOOM choices: ["24", "30", "70"], \
+ Falcon choices: ["32", "60"], \
+ Baichuan choices: ["32", "40"] \
+ Qwen choices: ["32"], \
+ XVERSE choices: ["40"], \
+ ChatGLM2 choices: ["28"],\
+ ChatGLM3 choices: ["28"]'
+ },
+ )
+ num_layer_trainable: Optional[int] = field(
+ default=3,
+ metadata={
+ "help": "Number of trainable layers for partial-parameter (freeze) fine-tuning."
+ },
+ )
+ name_module_trainable: Optional[
+ Literal["mlp", "self_attn", "self_attention"]
+ ] = field(
+ default="mlp",
+ metadata={
+ "help": 'Name of trainable modules for partial-parameter (freeze) fine-tuning. \
+ LLaMA choices: ["mlp", "self_attn"], \
+            BLOOM & Falcon & ChatGLM2 & ChatGLM3 choices: ["mlp", "self_attention"], \
+ Baichuan choices: ["mlp", "self_attn"], \
+ Qwen choices: ["mlp", "attn"], \
+ LLaMA-2, InternLM, XVERSE choices: the same as LLaMA.'
+ },
+ )
+ lora_rank: Optional[int] = field(
+ default=8, metadata={"help": "The intrinsic dimension for LoRA fine-tuning."}
+ )
+ lora_alpha: Optional[float] = field(
+ default=32.0,
+ metadata={
+ "help": "The scale factor for LoRA fine-tuning (similar with the learning rate)."
+ },
+ )
+ lora_dropout: Optional[float] = field(
+ default=0.1, metadata={"help": "Dropout rate for the LoRA fine-tuning."}
+ )
+ lora_target: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": 'Name(s) of target modules to apply LoRA. Use commas to separate multiple modules. \
+ LLaMA choices: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"], \
+ BLOOM & Falcon & ChatGLM2 & ChatGLM3 choices: ["query_key_value", "self_attention.dense", "mlp.dense"], \
+ Baichuan choices: ["W_pack", "o_proj", "gate_proj", "up_proj", "down_proj"], \
+ Qwen choices: ["c_attn", "attn.c_proj", "w1", "w2", "mlp.c_proj"], \
+ LLaMA-2, InternLM, XVERSE choices: the same as LLaMA.'
+ },
+ )
+ resume_lora_training: Optional[bool] = field(
+ default=True,
+ metadata={
+ "help": "Whether to resume training from the last LoRA weights or create new weights after merging them."
+ },
+ )
+ ppo_score_norm: Optional[bool] = field(
+ default=False, metadata={"help": "Use score normalization in PPO Training."}
+ )
+ dpo_beta: Optional[float] = field(
+ default=0.1, metadata={"help": "The beta parameter for the DPO loss."}
+ )
+
+ def __post_init__(self):
+ if isinstance(
+ self.lora_target, str
+ ): # support custom target modules/layers of LoRA
+ self.lora_target = [
+ target.strip() for target in self.lora_target.split(",")
+ ]
+
+ if (
+ self.num_layer_trainable > 0
+ ): # fine-tuning the last n layers if num_layer_trainable > 0
+ trainable_layer_ids = [
+ self.num_hidden_layers - k - 1 for k in range(self.num_layer_trainable)
+ ]
+ else: # fine-tuning the first n layers if num_layer_trainable < 0
+ trainable_layer_ids = [k for k in range(-self.num_layer_trainable)]
+
+ self.trainable_layers = [
+ "{:d}.{}".format(idx, self.name_module_trainable)
+ for idx in trainable_layer_ids
+ ]
+
+ assert self.finetuning_type in [
+ "lora",
+ "freeze",
+ "full",
+ "none",
+ ], "Invalid fine-tuning method."
+
+ def save_to_json(self, json_path: str):
+ r"""Saves the content of this instance in JSON format inside `json_path`."""
+ json_string = json.dumps(asdict(self), indent=2, sort_keys=True) + "\n"
+ with open(json_path, "w", encoding="utf-8") as f:
+ f.write(json_string)
+
+ @classmethod
+ def load_from_json(cls, json_path: str):
+ r"""Creates an instance from the content of `json_path`."""
+ with open(json_path, "r", encoding="utf-8") as f:
+ text = f.read()
+ return cls(**json.loads(text))
+
+
+@dataclass
+class TrainingArguments(Seq2SeqTrainingArguments):
+ cache_dir: Optional[str] = field(default=None)
+ train_on_source: Optional[bool] = field(
+ default=False,
+ metadata={
+ "help": "Whether to train on the input in addition to the target text."
+ },
+ )
+ full_finetune: bool = field(
+ default=False, metadata={"help": "Finetune the entire model without adapters."}
+ )
+ do_train: bool = field(
+ default=True,
+ metadata={"help": "To train or not to train, that is the question?"},
+ )
+ sample_generate: bool = field(
+ default=False, metadata={"help": "If do sample generation on evaluation."}
+ )
+ optim: str = field(
+ default="paged_adamw_32bit", metadata={"help": "The optimizer to be used"}
+ )
+ max_grad_norm: float = field(
+ default=0.3,
+ metadata={
+ "help": "Gradient clipping max norm. This is tuned and works well for all models tested."
+ },
+ )
+ gradient_checkpointing: bool = field(
+ default=True,
+ metadata={"help": "Use gradient checkpointing. You want to use this."},
+ )
+    predict_with_generate: bool = field(
+        default=False,
+        metadata={
+            "help": "Whether to use generate to calculate generative metrics (ROUGE, BLEU)."
+        },
+    )
+ model_max_length: int = field(
+ default=2048,
+ metadata={
+ "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
+ },
+ )
+ output_dir: str = field(
+ default=ADAPTER_PATH,
+ metadata={"help": "The output dir for logs and checkpoints"},
+ )
+ per_device_train_batch_size: int = field(
+ default=1,
+ metadata={
+ "help": "The training batch size per GPU. Increase for better speed."
+ },
+ )
+ gradient_accumulation_steps: int = field(
+ default=16,
+ metadata={
+ "help": "How many gradients to accumulate before to perform an optimizer step"
+ },
+ )
+ max_steps: int = field(
+ default=10000, metadata={"help": "How many optimizer update steps to take"}
+ )
+ # use lora dropout instead for regularization if needed
+ weight_decay: float = field(
+ default=0.0, metadata={"help": "The L2 weight decay rate of AdamW"}
+ )
+    learning_rate: float = field(default=0.0002, metadata={"help": "The learning rate"})
+ remove_unused_columns: bool = field(
+ default=False,
+ metadata={"help": "Removed unused columns. Needed to make this codebase work."},
+ )
+ lr_scheduler_type: str = field(
+ default="constant",
+ metadata={
+ "help": "Learning rate schedule. Constant a bit better than cosine, and has advantage for analysis"
+ },
+ )
+ warmup_ratio: float = field(
+ default=0.03, metadata={"help": "Fraction of steps to do a warmup for"}
+ )
+ logging_steps: int = field(
+ default=10,
+ metadata={"help": "The frequency of update steps after which to log the loss"},
+ )
+ group_by_length: bool = field(
+ default=True,
+ metadata={
+ "help": "Group sequences into batches with same length. Saves memory and speeds up training considerably."
+ },
+ )
+ save_strategy: str = field(
+ default="steps", metadata={"help": "When to save checkpoints"}
+ )
+ save_steps: int = field(default=250, metadata={"help": "How often to save a model"})
+ save_total_limit: int = field(
+ default=40,
+ metadata={
+ "help": "How many checkpoints to save before the oldest is overwritten"
+ },
+ )
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/dataset_info.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/dataset_info.json
new file mode 100644
index 00000000..0cf3102b
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/dataset_info.json
@@ -0,0 +1,29 @@
+{
+ "tugraph_analytics_train": {
+ "file_name": "./tugraph-analytics/train.json",
+ "columns": {
+ "prompt": "instruction",
+ "query": "input",
+ "response": "output",
+ "history": "history"
+ }
+ },
+ "tugraph_db_train": {
+ "file_name": "./tugraph-db/train.json",
+ "columns": {
+ "prompt": "instruction",
+ "query": "input",
+ "response": "output",
+ "history": "history"
+ }
+ },
+ "tugraph_db_example_train": {
+ "file_name": "./tugraph-db-example/train.json",
+ "columns": {
+ "prompt": "instruction",
+ "query": "input",
+ "response": "output",
+ "history": "history"
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/dev.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/dev.json
new file mode 100644
index 00000000..e3684ced
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/dev.json
@@ -0,0 +1,261 @@
+[
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找符合(michael{id:\"2654\"})-[]->()模式的节点和关系,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"2654\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到所有通过directed类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:directed]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式查找图数据库中是朋友关系用户id为\"662\"的节点m的用户有哪些?返回a节点,m节点",
+ "output": "OPTIONAL MATCH (m:user{id:\"662\"})<-[:is_friend]-(a:user) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "在图中查找所有通过person_person类型关系连接的节点对n和m,并返回这些节点对以及它们之间的person_person关系。",
+ "output": "MATCH (n)-[e:person_person]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "寻找Michael评分低于3星的电影,以及同样给这些电影评分低于3星的其他用户,并查找这些用户评分高于3星的其他电影。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得人物关系姓名为\"魏成\",介绍为\"数学天才,和申玉菲只是形式上的婚姻,申玉菲想通过他的数学能力,解决三体问题\"的人m的人有哪些?返回节点a,节点m",
+ "output": "MATCH (m:person{name:\"魏成\",introduce:\"数学天才,和申玉菲只是形式上的婚姻,申玉菲想通过他的数学能力,解决三体问题\"})<-[:person_person]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "匹配一个id为\"2654\"的节点,然后查找该节点通过任何关系指向的节点,返回这个路径p。",
+ "output": "MATCH p=(node{id:\"579\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中符合(michael{name:\"银河纪元\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p,跳过前10条数据",
+ "output": "MATCH p=(michael{name:\"银河纪元\"})-[]->() RETURN p SKIP 10",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找所有通过person_plan类型关系连接的节点对n和m,返回对应的n,人物和计划的关系,m",
+ "output": "MATCH (n)-[e:person_plan]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "参演Forrest Gump电影的有哪些人?",
+ "output": "MATCH (m:movie {title: 'Forrest Gump'})<-[:acted_in]-(a:person) RETURN a, m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中流派名称为\"Mystery\"的流派m的电影有哪些?返回a节点,m节点",
+ "output": "MATCH (m:genre{name:\"Mystery\"})<-[:has_genre]-(a:movie) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中参演标题为\"Forrest Gump\"的电影m的人有哪些?返回a节点,m节点",
+ "output": "MATCH (m:movie{title:\"Forrest Gump\"})<-[:acted_in]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将11.5转为浮点数并返回。",
+ "output": "RETURN toFloat('11.5')",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找用户名为Michael的用户u,找到所有与u是朋友关系的用户v,并且这些用户v对某部电影m的评分r.stars大于3,返回用户u、用户v和电影m的相关信息。",
+ "output": "MATCH (u:user {login: 'Michael'})-[:is_friend]->(v:user)-[r:rate]->(m:movie) WHERE r.stars > 3 RETURN u, v, m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中写标题为\"12 Angry Men\"的电影m的作者都有谁?返回a节点,m节点",
+ "output": "MATCH (m:movie{title:\"12 Angry Men\"})<-[:write]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询所有通过directed类型关系连接的节点对n和m,返回对应的n,导演,m,跳过第一条数据",
+ "output": "MATCH (n)-[e:directed]-(m) RETURN n,e,m SKIP 1",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找数据库中符合(michael{id:\"579\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"579\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中的组织n,返回n节点的name属性值,n节点的name属性值,按照节点的name属性升序排列返回的结果",
+ "output": "MATCH (n:organization) RETURN n.name,n.name ORDER BY n.name",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得\"弗雷德里克·泰勒相关的组织有哪些?返回m节点的name属性值,边r",
+ "output": "MATCH (u:person{name:\"弗雷德里克·泰勒\"})-[r:person_organization]->(m:organization) RETURN DISTINCT m.name,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询与标题为\"Gravity\"的电影p有关的人有哪些?返回子图,跳过前7条数据",
+ "output": "MATCH (p:movie{id:\"1293\",title:\"Gravity\"})-[e]-(neighbor:person) RETURN p,e,neighbor SKIP 7",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得给id为\"3915\"的电影m的打分的用户有哪些?返回相应的电影和用户",
+ "output": "MATCH (m:movie{id:\"3915\"})<-[:rate]-(a:user) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询id为517的电影u的流派有哪些?返回m节点的id属性值,r边",
+ "output": "MATCH (u:movie{id:\"517\"})-[r:has_genre]->(m:genre) RETURN DISTINCT m.id,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找姓名为Laurence Fishburne和姓名为Carrie-Anne Moss的人,返回他们各自的出生日期和出生日期相差的绝对值。",
+ "output": "MATCH (a:person {name: 'Laurence Fishburne'}),(e:person {name: 'Carrie-Anne Moss'}) RETURN a.born, e.born, abs(a.born-e.born)",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查找所有带有person标签的节点,按照name属性值进行升序排序,跳过排序后的第一个结果,返回接下来的两个结果的name属性值。",
+ "output": "MATCH (n:person) RETURN n.name ORDER BY n.name SKIP 1 LIMIT 2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得所有通过has_genre类型关系连接的节点对n和m,返回对应的n,有流派,m",
+ "output": "MATCH (n)-[e:has_genre]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询与叶文洁关联的人物有关的人物,返回子图。",
+ "output": "MATCH (a:person {name: \"叶文洁\"})-[e1:person_person]->(n)-[e2:person_person]->(b:person) RETURN a,b,n,e1,e2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得id为\"2432\",名字为\"Pat Morita\"的人u写的电影有哪些?返回m节点的tagline属性值,边r",
+ "output": "MATCH (u:person{id:\"2432\",name:\"Pat Morita\"})-[r:write]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "找到人物叶文洁,设置她的介绍信息为清华大学教授、ETO精神领袖、首位和三体人交流的人,返回此节点。",
+ "output": "MATCH (p:person {name: \"叶文洁\"}) SET p.introduce = \"清华大学教授、ETO精神领袖、首位和三体人交流的人\" RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将'Anne', 'John', 'Bill', 'Diane', 'Eve'作为临时结果集names,从names中检索数据,并返回其中的第二到四个元素。",
+ "output": "WITH ['Anne', 'John', 'Bill', 'Diane', 'Eve'] AS names RETURN names[1..3] AS result",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找与id是\"187\"的用户是朋友关系的用户有哪些?返回m节点的id属性值,r边",
+ "output": "MATCH (u:user{id:\"187\"})-[r:is_friend]->(m:user) RETURN m.id,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找所有带有person标签的节点的name属性值。查找所有带有movie标签的节点的title属性值。将两个查询的结果合并在一起返回重命名为name,且不去除重复项。",
+ "output": "MATCH (n:person) RETURN n.name AS name UNION ALL MATCH (n:movie) RETURN n.title AS name",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "将返回图中所有通过person_person关系相连的节点和关系。",
+ "output": "MATCH (n)-[e:person_person]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "Michael的朋友们评分大于3分的电影有哪些?返回相关信息。",
+ "output": "MATCH (u:user {login: 'Michael'})-[:is_friend]->(v:user)-[r:rate]->(m:movie) WHERE r.stars > 3 RETURN u, v, m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将'one','two','three'作为临时结果集coll,返回coll和coll的第一个元素。",
+ "output": "WITH ['one','two','three'] AS coll RETURN coll, head(coll)",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找用户名为’Michael’的用户u,找到u评分小于3的电影m。接着找到所有也对电影m评分小于3的用户v。然后,从这些用户v中找到对另一部电影m2评分大于3的情况。返回用户u、电影m、用户v和电影m2的相关信息。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询所有通过rate类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:rate]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得符合(michael{id:\"1245\",name:\"Alan Tudyk\"})-[]->()模式的节点和关系,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"1245\",name:\"Alan Tudyk\"})-[]->() RETURN p",
+ "history": []
+ }
+]
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_dev.txt b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_dev.txt
new file mode 100644
index 00000000..662be49f
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_dev.txt
@@ -0,0 +1,37 @@
+MATCH p=(michael{id:"2654"})-[]->() RETURN p
+MATCH (n)-[e:directed]-(m) RETURN n,e,m
+OPTIONAL MATCH (m:user{id:"662"})<-[:is_friend]-(a:user) RETURN a,m
+MATCH (n)-[e:person_person]-(m) RETURN n,e,m
+MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2
+MATCH (m:person{name:"魏成",introduce:"数学天才,和申玉菲只是形式上的婚姻,申玉菲想通过他的数学能力,解决三体问题"})<-[:person_person]-(a:person) RETURN a,m
+MATCH p=(node{id:"579"})-[]->() RETURN p
+MATCH p=(michael{name:"银河纪元"})-[]->() RETURN p SKIP 10
+MATCH (n)-[e:person_plan]-(m) RETURN n,e,m
+MATCH (m:movie {title: 'Forrest Gump'})<-[:acted_in]-(a:person) RETURN a, m
+MATCH (m:genre{name:"Mystery"})<-[:has_genre]-(a:movie) RETURN a,m
+MATCH (m:movie{title:"Forrest Gump"})<-[:acted_in]-(a:person) RETURN a,m
+RETURN toFloat('11.5')
+MATCH (u:user {login: 'Michael'})-[:is_friend]->(v:user)-[r:rate]->(m:movie) WHERE r.stars > 3 RETURN u, v, m
+MATCH (m:movie{title:"12 Angry Men"})<-[:write]-(a:person) RETURN a,m
+MATCH (n)-[e:directed]-(m) RETURN n,e,m SKIP 1
+MATCH p=(michael{id:"579"})-[]->() RETURN p
+MATCH (n:organization) RETURN n.name,n.name ORDER BY n.name
+MATCH (u:person{name:"弗雷德里克·泰勒"})-[r:person_organization]->(m:organization) RETURN DISTINCT m.name,r
+MATCH (p:movie{id:"1293",title:"Gravity"})-[e]-(neighbor:person) RETURN p,e,neighbor SKIP 7
+MATCH (m:movie{id:"3915"})<-[:rate]-(a:user) RETURN a,m
+MATCH (u:movie{id:"517"})-[r:has_genre]->(m:genre) RETURN DISTINCT m.id,r
+MATCH (a:person {name: 'Laurence Fishburne'}),(e:person {name: 'Carrie-Anne Moss'}) RETURN a.born, e.born, abs(a.born-e.born)
+MATCH (n:person) RETURN n.name ORDER BY n.name SKIP 1 LIMIT 2
+MATCH (n)-[e:has_genre]-(m) RETURN n,e,m
+MATCH (a:person {name: "叶文洁"})-[e1:person_person]->(n)-[e2:person_person]->(b:person) RETURN a,b,n,e1,e2
+MATCH (u:person{id:"2432",name:"Pat Morita"})-[r:write]->(m:movie) RETURN m.tagline,r
+MATCH (p:person {name: "叶文洁"}) SET p.introduce = "清华大学教授、ETO精神领袖、首位和三体人交流的人" RETURN p
+WITH ['Anne', 'John', 'Bill', 'Diane', 'Eve'] AS names RETURN names[1..3] AS result
+MATCH (u:user{id:"187"})-[r:is_friend]->(m:user) RETURN m.id,r
+MATCH (n:person) RETURN n.name AS name UNION ALL MATCH (n:movie) RETURN n.title AS name
+MATCH (n)-[e:person_person]-(m) RETURN n,e,m
+MATCH (u:user {login: 'Michael'})-[:is_friend]->(v:user)-[r:rate]->(m:movie) WHERE r.stars > 3 RETURN u, v, m
+WITH ['one','two','three'] AS coll RETURN coll, head(coll)
+MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2
+MATCH (n)-[e:rate]-(m) RETURN n,e,m
+MATCH p=(michael{id:"1245",name:"Alan Tudyk"})-[]->() RETURN p
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_test.txt b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_test.txt
new file mode 100644
index 00000000..db6db3c6
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/gold_test.txt
@@ -0,0 +1,37 @@
+MATCH (p:time{name:"DX3906星系黑域纪元"})-[e]-(neighbor:time) RETURN p,e,neighbor SKIP 4
+MATCH (n) RETURN n
+MATCH (p:person{name:"托马斯·维德"})-[e]-(neighbor:plan) RETURN p,e,neighbor
+MATCH (m:time{name:"威慑纪元",time:"公元2208年-2270年"})<-[:time_time]-(a:time) RETURN a,m
+OPTIONAL MATCH (p:movie{tagline:"Everything is Connected"})-[e]-(neighbor:genre) RETURN p,e,neighbor
+MATCH (n)-[e:person_person]-(m) RETURN n,e,m
+MATCH (u:person{born:"1965"})-[r:acted_in]->(m:movie) RETURN m.tagline,r
+MATCH (n:person) RETURN n.name ORDER BY n.name SKIP 1 LIMIT 2
+MATCH (m:movie{title:"Beowulf"})<-[:produce]-(a:person) RETURN a,m SKIP 4
+MATCH (n)-[e:person_plan]-(m) RETURN n,e,m
+MATCH (n) RETURN n
+OPTIONAL MATCH p=(michael{id:"1935",title:"The Croods"})-[]->() RETURN p
+MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie) WHERE r.stars < 3 RETURN m.title, r.stars
+MATCH p=(michael{id:"1105",name:"Hans Matheson"})-[]->() RETURN p
+MATCH p = (laurence {name: 'Laurence Fishburne'})-[:acted_in*2]-(co_actor) RETURN p
+MATCH (p:plan {name: "面壁计划"})-[e]-(neighbor:person) RETURN neighbor,p,e
+MATCH (n)-[e:organization_plan]-(m) RETURN n,e,m
+MATCH (n)-[e:produce]-(m) RETURN n,e,m
+MATCH (u:movie{tagline:"The Legend Ends"})-[r:has_keyword]->(m:keyword) RETURN m.id,r SKIP 1
+MATCH (m:organization{name:"智子"})<-[:organization_organization]-(a:organization) RETURN a,m
+MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v) WHERE r.stars < 3 AND s.stars < 3 RETURN u, m, v
+MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2
+MATCH (n)-[e:person_organization]-(m) RETURN n,e,m
+MATCH (m:movie {title:'Forrest Gump'})<-[r:rate]-(u:user)-[r2:rate]->(m2:movie) WHERE r.stars>3 AND r2.stars>3 RETURN m, u,m2
+OPTIONAL MATCH (p:time{name:"危机纪元"})-[e]-(neighbor:time) RETURN p,e,neighbor
+MATCH (m:organization{name:"PIA"})<-[:person_organization]-(a:person) RETURN a,m
+MATCH p=(node{id:"579"})-[]->() RETURN p
+MATCH (n:Account{id:4817170722885471951}) RETURN n.createTime as createTime, n.isBlocked as isBlocked, n.type as type
+MATCH (p:organization{name:"PDC"})-[e]-(neighbor:organization) RETURN p,e,neighbor
+OPTIONAL MATCH (m:plan{name:"古筝行动"})<-[:organization_plan]-(a:organization) RETURN a,m
+OPTIONAL MATCH (n:Account{id:177335832397484055})<-[e:transfer]-(m:Account) WHERE e.amount > 0.0 AND e.timestamp > 627020616747 AND e.timestamp < 1669690342640 AND m.isBlocked=true WITH count(m) * 1.0 as numM OPTIONAL MATCH (n:Account{id:177335832397484055})<-[e:transfer]-(m:Account) WITH count(m) as numIn, numM RETURN CASE WHEN numIn = 0 THEN -1 ELSE round(numM / numIn * 1000) / 1000 END as blockRatio
+MATCH (n)-[e:organization_plan]-(m) RETURN n,e,m
+MATCH (a:person {name: "叶文洁"})-[e1:person_person]->(n)<-[e2:person_person]-(b:person {name: "汪淼"}) RETURN a,b,n,e1,e2
+MATCH (matrix:movie {title: 'The Matrix'})-[*0..1]-(x) RETURN x
+OPTIONAL MATCH p=(michael{name:"太阳系",introduce:""})-[]->() RETURN p
+MATCH (p:movie{id:"805",title:"Star Wars: Episode IV - A New Hope"})-[e]-(neighbor:person) RETURN DISTINCT p,e,neighbor
+MATCH (n:person) RETURN DISTINCT n.name,n.id ORDER BY n.id DESC
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/test.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/test.json
new file mode 100644
index 00000000..abec9203
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/test.json
@@ -0,0 +1,261 @@
+[
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得与姓名为\"DX3906星系黑域纪元\"的时间p有关的时间有哪些?返回相应的节点和关系,跳过前4条数据",
+ "output": "MATCH (p:time{name:\"DX3906星系黑域纪元\"})-[e]-(neighbor:time) RETURN p,e,neighbor SKIP 4",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "在数据库中的所有节点中,跳过第一个节点,并返回接下来最多100个节点。",
+ "output": "MATCH (n) RETURN n",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找数据库中与人名为\"托马斯·维德\"的节点p有关的计划有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:person{name:\"托马斯·维德\"})-[e]-(neighbor:plan) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找数据库中名称为\"威慑纪元\",时间为\"公元2208年-2270年\"的时间线m有关的时间线有哪些?返回节点a,节点m",
+ "output": "MATCH (m:time{name:\"威慑纪元\",time:\"公元2208年-2270年\"})<-[:time_time]-(a:time) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式查找数据库中与标签为\"Everything is Connected\"的电影p有关的流派有哪些?返回相应的节点和关系",
+ "output": "OPTIONAL MATCH (p:movie{tagline:\"Everything is Connected\"})-[e]-(neighbor:genre) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中所有通过person_person类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:person_person]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询出生日期为\"1965\"的人u参演的电影有哪些?返回m节点的tagline属性值,r边",
+ "output": "MATCH (u:person{born:\"1965\"})-[r:acted_in]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查询标签为person的节点的相关信息,结果按id降序排序,保留去除第1个数据后的2个数据。",
+ "output": "MATCH (n:person) RETURN n.name ORDER BY n.name SKIP 1 LIMIT 2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到出版标题为\"Beowulf\"的电影m的人有哪些?返回a节点,m节点,跳过前4条数据",
+ "output": "MATCH (m:movie{title:\"Beowulf\"})<-[:produce]-(a:person) RETURN a,m SKIP 4",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找数据库中所有通过person_plan类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:person_plan]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "返回数据库中的所有节点。",
+ "output": "MATCH (n) RETURN n",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式在图中查找符合(michael{id:\"1935\",title:\"The Croods\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "OPTIONAL MATCH p=(michael{id:\"1935\",title:\"The Croods\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找用户Michael评分低于3星的电影,返回电影的标题和评分。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie) WHERE r.stars < 3 RETURN m.title, r.stars",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找符合(michael{id:\"1105\",name:\"Hans Matheson\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"1105\",name:\"Hans Matheson\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到与姓名为Laurence Fishburne合作过的演员,并返回这些路径。",
+ "output": "MATCH p = (laurence {name: 'Laurence Fishburne'})-[:acted_in*2]-(co_actor) RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "与面壁计划有关的人有哪些?",
+ "output": "MATCH (p:plan {name: \"面壁计划\"})-[e]-(neighbor:person) RETURN neighbor,p,e",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询所有通过organization_plan类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:organization_plan]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找所有通过produce类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:produce]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找标签为\"The Legend Ends\"的电影u的关键字有哪些?返回m节点的id属性值,r边,跳过第一条数据",
+ "output": "MATCH (u:movie{tagline:\"The Legend Ends\"})-[r:has_keyword]->(m:keyword) RETURN m.id,r SKIP 1",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询与\"智子\"的组织m有关的组织有哪些?返回节点a,节点m",
+ "output": "MATCH (m:organization{name:\"智子\"})<-[:organization_organization]-(a:organization) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询和用户Michael有共同讨厌的电影的人,返回Michael、用户和电影。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v) WHERE r.stars < 3 AND s.stars < 3 RETURN u, m, v",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "那些跟用户Michael有相同电影品味的人喜欢的电影有哪些?返回相关信息。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找所有通过person_organization类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:person_organization]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询给电影Forrest Gump打高分的的用户给其他电影打分超过3星的电影,返回电影m,用户u和打分大于3星的电影m2。",
+ "output": "MATCH (m:movie {title:'Forrest Gump'})<-[r:rate]-(u:user)-[r2:rate]->(m2:movie) WHERE r.stars>3 AND r2.stars>3 RETURN m, u,m2",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "以可选的方式找到与名为\"危机纪元\"的时间p有关的时间有哪些?返回相应的节点和关系",
+ "output": "OPTIONAL MATCH (p:time{name:\"危机纪元\"})-[e]-(neighbor:time) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中与名称为\"PIA\"的组织m有出边关系的人哪些?返回节点a,节点m",
+ "output": "MATCH (m:organization{name:\"PIA\"})<-[:person_organization]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从id为579的node节点到其他节点的所有直接相连的路径。",
+ "output": "MATCH p=(node{id:\"579\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找一个id是4817170722885471951的账户的createTime、isBlocked和type属性。",
+ "output": "MATCH (n:Account{id:4817170722885471951}) RETURN n.createTime as createTime, n.isBlocked as isBlocked, n.type as type",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找与名字为\"PDC\"的组织p有关的组织有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:organization{name:\"PDC\"})-[e]-(neighbor:organization) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "以可选的方式查找数据库中与名称为\"古筝行动\"的计划m有关的组织有哪些?返回节点a,节点m",
+ "output": "OPTIONAL MATCH (m:plan{name:\"古筝行动\"})<-[:organization_plan]-(a:organization) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询账户id为177335832397484055的账户在627020616747-1669690342640时间范围内,来自黑账户的转入量占其所有转入量的比例。返回该比例。如果给定账户没有转入记录,则返回-1。",
+ "output": "OPTIONAL MATCH (n:Account{id:177335832397484055})<-[e:transfer]-(m:Account) WHERE e.amount > 0.0 AND e.timestamp > 627020616747 AND e.timestamp < 1669690342640 AND m.isBlocked=true WITH count(m) * 1.0 as numM OPTIONAL MATCH (n:Account{id:177335832397484055})<-[e:transfer]-(m:Account) WITH count(m) as numIn, numM RETURN CASE WHEN numIn = 0 THEN -1 ELSE round(numM / numIn * 1000) / 1000 END as blockRatio",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询所有通过organization_plan类型关系连接的节点对n和m,返回对应的n,组织和计划间的关系,m",
+ "output": "MATCH (n)-[e:organization_plan]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询叶文洁和汪淼这两个人之间的的共同关联的人物都有谁。",
+ "output": "MATCH (a:person {name: \"叶文洁\"})-[e1:person_person]->(n)<-[e2:person_person]-(b:person {name: \"汪淼\"}) RETURN a,b,n,e1,e2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到电影The Matrix之间有零到一个关系的所有节点,包括电影节点自身和与之直接关联的节点。",
+ "output": "MATCH (matrix:movie {title: 'The Matrix'})-[*0..1]-(x) RETURN x",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "以可选的方式查找数据库中符合(michael{name:\"太阳系\",introduce:\"\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "OPTIONAL MATCH p=(michael{name:\"太阳系\",introduce:\"\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找数据库中与标题为\"Star Wars: Episode IV - A New Hope\"的电影p有关的人有哪些?返回子图",
+ "output": "MATCH (p:movie{id:\"805\",title:\"Star Wars: Episode IV - A New Hope\"})-[e]-(neighbor:person) RETURN DISTINCT p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找人n,返回n节点的name属性值,n节点的id属性值,按照节点的id属性降序排列返回的结果",
+ "output": "MATCH (n:person) RETURN DISTINCT n.name,n.id ORDER BY person.id DESC",
+ "history": []
+ }
+]
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/train.json b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/train.json
new file mode 100644
index 00000000..eceed90d
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data/tugraph-db-example/train.json
@@ -0,0 +1,765 @@
+[
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "找所有name属性值大于或等于'Peter'的节点,并返回这些节点的name 和 born属性值。",
+ "output": "MATCH (a) WHERE a.name >= 'Peter' RETURN a.name, a.born",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询姓名为\"量子号\",介绍为\"炮灰\"的组织u具有组织和计划间的关系的计划有哪些?返回m节点的name属性值,边r",
+ "output": "MATCH (u:organization{name:\"量子号\",introduce:\"炮灰\"})-[r:organization_plan]->(m:plan) RETURN m.name,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找与登录用户名为\"Stewart\"的用户p有关的电影有哪些?返回子图",
+ "output": "MATCH (p:user{id:\"399\",login:\"Stewart\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询所有通过has_keyword类型关系连接的节点对n和m,返回对应的n,关键字边,m",
+ "output": "MATCH (n)-[e:has_keyword]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "筛选出name属性值大于或等于'Peter'的节点a,返回节点a的name和born属性值。",
+ "output": "MATCH (a) WHERE a.name >= 'Peter' RETURN a.name, a.born",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "找到数据库中具有euid=0_3937_0_0_0的关系,并返回其相连的节点对。",
+ "output": "MATCH (a)-[r]->(b) WHERE euid(r)=\"0_3937_0_0_0\" RETURN a,b",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将2.3转为整数后返回,并重命名为integer。",
+ "output": "RETURN toInteger('2.3') AS integer",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得与名为\"神迹计划\"的计划p有关的组织有哪些?返回对应的计划,e,组织",
+ "output": "MATCH (p:plan{name:\"神迹计划\"})-[e]-(neighbor:organization) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中符合(michael{id:\"1102\",name:\"Jack O'Connell\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"1102\",name:\"Jack O'Connell\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查找具有belt属性值为'white'或belt属性为空的所有节点,并返回这些节点的name、age和belt属性值,同时按照name属性值升序排列返回的结果。",
+ "output": "MATCH (n) WHERE n.belt = 'white' OR n.belt IS NULL RETURN n.name, n.age, n.belt ORDER BY n.name",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得名称为\"叶哲泰\",介绍为\"物理学教授,在文革中因为坚持真理被学生打死\"的人相关的计划有哪些?返回m节点的name属性值,边r",
+ "output": "MATCH (u:person{name:\"叶哲泰\",introduce:\"物理学教授,在文革中因为坚持真理被学生打死\"})-[r:person_plan]->(m:plan) RETURN DISTINCT m.name,r",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查找所有belt属性值为'white' 或belt属性为空的节点,返回这些节点的name、age和belt属性值,按照节点的name属性值升序排列返回的结果。",
+ "output": "MATCH (n) WHERE n.belt = 'white' OR n.belt IS NULL RETURN n.name, n.age, n.belt ORDER BY n.name",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "筛选出名称在字典序上等于或在'Peter'之后的所有节点,返回所有节点的name和born属性值。",
+ "output": "MATCH (a) WHERE a.name >= 'Peter' RETURN a.name, a.born",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找流派n,返回n节点的name属性值,n节点的name属性值,同时按照节点的name属性升序排序",
+ "output": "MATCH (n:genre) RETURN n.name,n.name ORDER BY n.name",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找姓名为\"威慑纪元\",时间为\"公元2208年-2270年\"的时间u具有时间关系的时间有哪些?返回m节点的name属性值,边r",
+ "output": "MATCH (u:time{name:\"威慑纪元\",time:\"公元2208年-2270年\"})-[r:time_time]->(m:time) RETURN m.name,r",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找与名字为\"关一帆\"的人p有关的人有哪些?返回对应的人,e,人",
+ "output": "MATCH (p:person{name:\"关一帆\"})-[e]-(neighbor:person) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得流派id是\"610\"的电影有哪些?返回a节点,m节点",
+ "output": "MATCH (m:genre{id:\"610\"})<-[:has_genre]-(a:movie) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "找到\"掩体计划\"的计划m有关的人有哪些?返回节点a,节点m",
+ "output": "MATCH (m:plan{name:\"掩体计划\"})<-[:person_plan]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中,返回节点a的name属性值,并将该值重命名为name,保留前670条数据",
+ "output": "MATCH (a{name:\"蓝星\"}) RETURN a.name AS name LIMIT 670",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找数据库中的关键字n,返回n节点的id属性值,n节点的name属性值,同时按照节点的id属性排序",
+ "output": "MATCH (n:keyword) RETURN n.id,n.name ORDER BY keyword.id",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将'one','two','three'作为临时结果集coll,返回coll和coll的最后一个元素。",
+ "output": "WITH ['one','two','three'] AS coll RETURN coll, last(coll)",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "找到符合(michael{name:\"叶文洁\",introduce:\"清华大学教授、ETO精神领袖\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "MATCH p=(michael{name:\"叶文洁\",introduce:\"清华大学教授、ETO精神领袖\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找,返回a节点的name属性值,并将该值重命名为name",
+ "output": "MATCH (a{name:\"人类时代\",time:\"公元1453年-1980年\",important_events:\"叶文洁出生、红岸基地成立\"}) RETURN a.name AS name",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到与Laurence Fishburne有两步acted_in关系的共同演员,返回这些路径。",
+ "output": "MATCH p = (laurence {name: 'Laurence Fishburne'})-[:acted_in*2]-(co_actor) RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询所有通过time_time类型关系连接的节点对n和m,返回对应的n,时间上的关系,m",
+ "output": "MATCH (n)-[e:time_time]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询Michael讨厌的电影,返回电影的标题和评分。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie) WHERE r.stars < 3 RETURN m.title, r.stars",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "在图中查找的时间n,返回n节点的important_events属性值,n节点的name属性值,同时按照节点的important_events属性升序排序",
+ "output": "MATCH (n:time) RETURN DISTINCT n.important_events,n.name ORDER BY n.important_events",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找名称为\"Chiwetel Ejiofor\",出生年份为\"1977\"的人u出版的电影有哪些?返回m节点的tagline属性值,边r",
+ "output": "MATCH (u:person{name:\"Chiwetel Ejiofor\",born:\"1977\"})-[r:produce]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "创建一个人物节点,其介绍为物理学教授,姓名为bbb,返回此节点。",
+ "output": "CREATE (a:person {introduce: \"物理学教授\", name: \"bbb\"}) RETURN a",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中姓名为\"二向箔\"的组织u具有组织间的关系的组织有哪些?返回m节点的introduce属性值,边r",
+ "output": "MATCH (u:organization{name:\"二向箔\"})-[r:organization_organization]->(m:organization) RETURN m.introduce,r",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找数据库中节点u人\"智子\"人物间关系有哪些?返回有关节点的introduce属性值,边r",
+ "output": "MATCH (u:person{name:\"智子\"})-[r:person_person]->(m:person) RETURN m.introduce,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式从数据库中查找与标题为\"The Avengers\"的电影p有关的人有哪些?返回对应的电影,e,人",
+ "output": "OPTIONAL MATCH (p:movie{id:\"2055\",title:\"The Avengers\"})-[e]-(neighbor:person) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询图数据库中与Michael Douglas有直接出边关系的路径。",
+ "output": "MATCH p = (michael {name: 'Michael Douglas'})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找与姓名为\"Talia Shire\"的人p有关的电影有哪些?返回子图",
+ "output": "MATCH (p:person{id:\"482\",name:\"Talia Shire\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "返回数据库中的所有匹配到的节点。",
+ "output": "MATCH (n) RETURN n",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "给定一个src账户260083977992279253,在startTime1627020616747=和endTime=1669690342640之间的特定时间范围内,找到从src到dst的所有金额超过阈值的转出(edge),返回转出次数和金额总和。",
+ "output": "MATCH (n:Account{id:260083977992279253}) WITH n MATCH (n)-[e:transfer]->(m:Account) WHERE e.amount > 0.0 AND e.timestamp > 1627020616747 AND e.timestamp < 1669690342640 WITH m.id as dstId, count(e) as numEdges, sum(e.amount) as sumAmount RETURN dstId, numEdges, round(sumAmount * 1000) / 1000 as sumAmount ORDER BY sumAmount DESC, dstId ASC",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找所有通过has_keyword类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:has_keyword]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "找到符合(a)-[r]->(b)关系,过滤条件是 euid(r)的值必须等于0_3937_0_0_0,返回所有符合条件的a,b节点。",
+ "output": "MATCH (a)-[r]->(b) WHERE euid(r)=\"0_3937_0_0_0\" RETURN a,b",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找数据库中符合(michael{name:\"申玉菲\"})-[]->()模式的节点和关系,返回匹配的链路p",
+ "output": "MATCH p=(michael{name:\"申玉菲\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中与id为\"2239\"的人p有关的电影有哪些?返回子图",
+ "output": "MATCH (p:person{id:\"2239\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找给标签为\"Meet The Original Force Of Nature\"的电影m打分的用户有哪些?返回a节点,m节点",
+ "output": "MATCH (m:movie{tagline:\"Meet The Original Force Of Nature\"})<-[:rate]-(a:user) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询所有参演Forrest Gump电影的演员。",
+ "output": "MATCH (m:movie {title: 'Forrest Gump'})<-[:acted_in]-(a:person) RETURN a, m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找与标签为\"It was the greatest art heist in history\"的电影p有关的关键字有哪些?返回子图",
+ "output": "MATCH (p:movie{tagline:\"It was the greatest art heist in history\"})-[e]-(neighbor:keyword) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找账户4779452526250437773,在startTime1627020616747=和endTime=1669690342640之间的时间范围内,其他账户向该账户转入金额超过100的转账记录。返回转入的数量和金额总和。",
+ "output": "MATCH (n:Account{id:4779452526250437773})<-[e:transfer]-(m:Account) WHERE e.amount > 100.0 AND e.timestamp > 1627020616747 AND e.timestamp < 1669690342640 WITH m.id as srcId, count(e) as numEdges, sum(e.amount) as sumAmount RETURN srcId, numEdges, round(sumAmount * 1000) / 1000 as sumAmount ORDER BY sumAmount DESC, srcId ASC",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得所有通过person_organization类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:person_organization]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到电影The Matrix节点以及与其有直接关系的所有节点,并返回这些节点。",
+ "output": "MATCH (matrix:movie {title: 'The Matrix'})-[*0..1]-(x) RETURN x",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中与出生日期为\"1943\"的人p有关的电影有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:person{id:\"483\",name:\"Gianni Russo\",born:\"1943\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找符合(node{id:\"579\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p并返回。",
+ "output": "MATCH p=(node{id:\"579\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查询图数据库中名称为Carrie-Anne Moss的节点,并返回该节点的 born 属性值,且将该值重命名为SomethingTotallyDifferent。",
+ "output": "MATCH (a {name: 'Carrie-Anne Moss'}) RETURN a.born AS SomethingTotallyDifferent",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中符合(michael{name:\"量子号\",introduce:\"炮灰\"})-[]->()模式的节点和关系,返回匹配的链路p",
+ "output": "MATCH p=(michael{name:\"量子号\",introduce:\"炮灰\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询与用户u有相似低评分的电影偏好,但对其他电影有高评分的用户。返回相关的用户和电影。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v)-[r2:rate]->(m2:movie) WHERE r.stars < 3 AND s.stars < 3 AND r2.stars > 3 RETURN u, m, v, m2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中的用户n,返回n节点的id属性值,n节点的id属性值,按照节点的id属性升序排列返回的结果",
+ "output": "MATCH (n:user) RETURN n.id,n.id ORDER BY user.id",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式从数据库中查找所有通过rate类型关系连接的节点对n和m,返回对应的n,打分,m",
+ "output": "OPTIONAL MATCH (n)-[e:rate]-(m) RETURN DISTINCT n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询图数据库中与Michael Douglas有直接出边关系的路径。",
+ "output": "MATCH p = (michael {name: 'Michael Douglas'})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找,返回节点a的name属性值,并将该值重命名为name",
+ "output": "MATCH (a{name:\"阶梯计划\"}) RETURN a.name AS name",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "节点n,返回n.name,只保留前10项。",
+ "output": "MATCH (n) RETURN n.name LIMIT 10",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "从图数据库中查找所有带有person标签的节点,并按照节点的name属性值排序,跳过第一个结果,返回接下来的两个结果name属性值。",
+ "output": "MATCH (n:person) RETURN n.name ORDER BY n.name SKIP 1 LIMIT 2",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找,返回a节点的introduce属性值,并将该值重命名为introduce",
+ "output": "MATCH (a{name:\"叶哲泰\"}) RETURN a.introduce AS introduce",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找符合(michael{id:\"21\"})-[]->()模式的节点和关系,将匹配到的路径赋值给变量p,返回匹配的链路p",
+ "output": "MATCH p=(michael{id:\"21\"})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到与Michael Douglas有直接出边关系的所有路径,返回这些路径。",
+ "output": "MATCH p = (michael {name: 'Michael Douglas'})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中所有通过write类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:write]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中所有通过acted_in类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:acted_in]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式查找数据库中的电影n,返回n节点的title属性值,n节点的id属性值,同时按照节点的title属性升序排序",
+ "output": "OPTIONAL MATCH (n:movie) RETURN n.title,n.id ORDER BY movie.title",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到有关键字id为\"3388\"的关键字模板m的电影有哪些?返回a节点,m节点",
+ "output": "MATCH (m:keyword{id:\"3388\"})<-[:has_keyword]-(a:movie) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图数据库中创建一个带有person标签的新节点,并为该节点设置id属性为2001,name属性值为Anders,完成创建后,返回该节点。",
+ "output": "CREATE (n:person {id:2002, name: 'Andres'}) RETURN n",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找数据库中与名字为\"Paul Norell\"的人p有关的电影有哪些?返回对应的人,e,电影",
+ "output": "MATCH (p:person{id:\"660\",name:\"Paul Norell\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "创建一个带有person标签的新节点,并为该节点设置id属性为2002和name属性为Anders。创建完成后,返回新节点的详细信息。",
+ "output": "CREATE (n:person {id:2002, name: 'Andres'}) RETURN n",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找符合(michael{name:\"伊万\",introduce:\"万有引力号维护工程师\"})-[]->()模式的节点和关系,返回匹配的链路p,跳过前3条数据",
+ "output": "MATCH p=(michael{name:\"伊万\",introduce:\"万有引力号维护工程师\"})-[]->() RETURN p SKIP 3",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找与id为\"610\"的流派p有关的电影有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:genre{id:\"610\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找的计划n,返回n节点的content属性值,n节点的content属性值,同时按照节点的content属性降序排序",
+ "output": "MATCH (n:plan) RETURN n.content,n.content ORDER BY n.content DESC",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询和用户Michael给同一部电影打分小于3星的用户,返回Michael、用户和电影的信息组成的子图。",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie)<-[s:rate]-(v) WHERE r.stars < 3 AND s.stars < 3 RETURN u, m, v",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找所有通过has_genre类型关系连接的节点对n和m,返回对应的n,流派e,m",
+ "output": "MATCH (n)-[e:has_genre]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "将'Anne', 'John', 'Bill', 'Diane', 'Eve'作为临时结果集names,从names中检索数据,并返回names列表中的第二到四个元素。",
+ "output": "WITH ['Anne', 'John', 'Bill', 'Diane', 'Eve'] AS names RETURN names[1..3] AS result",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "Michael讨厌的电影有哪些?",
+ "output": "MATCH (u:user {login: 'Michael'})-[r:rate]->(m:movie) WHERE r.stars < 3 RETURN m.title, r.stars",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询谁在电影Forrest Gump中扮演了什么角色,返回人的名字和扮演的角色。",
+ "output": "MATCH (m:movie {title: 'Forrest Gump'})<-[r:acted_in]-(a:person) RETURN a.name,r.role",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "在图中查找所有通过is_friend类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:is_friend]-(m) RETURN DISTINCT n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查询边为人物关系的节点和路径。",
+ "output": "MATCH (n)-[e:person_person]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "首先查找所有向src账户id是243478878511713854转过账的中间账户m,然后找到m的转账对象中是黑户的账号dst,且src和dst账号id不同,以上转账记录时间在1627020616747到1669690342640范围内,返回去重后的dst账号id并排序。",
+ "output": "MATCH (src:Account{id:243478878511713854})<-[e1:transfer]-(m:Account)-[e2:transfer]->(dst:Account) WHERE dst.isBlocked = true AND src.id <> dst.id AND e1.timestamp > 1627020616747 AND e1.timestamp < 1669690342640 AND e2.timestamp > 1627020616747 AND e2.timestamp < 1669690342640 RETURN DISTINCT dst.id as dstId ORDER BY dstId ASC",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到与Michael Douglas有直接出边关系的所有路径,返回这些路径。",
+ "output": "MATCH p = (michael {name: 'Michael Douglas'})-[]->() RETURN p",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到所有通过acted_in类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:acted_in]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到所有通过write类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:write]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "登录用户是Michaeld的朋友喜爱的电影有哪些?返回相关信息。",
+ "output": "MATCH (u:user {login: 'Michael'})-[:is_friend]->(v:user)-[r:rate]->(m:movie) WHERE r.stars > 3 RETURN u, v, m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "创建一个标签为person的节点,其id属性为2001,name属性值为Anders。",
+ "output": "CREATE (n:person {id:2001, name: 'Andres'})",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得出生年份为1977的Chiwetel Ejiofor出版的电影m有哪些?返回m节点的tagline属性值和连接关系。",
+ "output": "MATCH (u:person{name:\"Chiwetel Ejiofor\",born:\"1977\"})-[r:produce]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找名称为Carrie-Anne Moss的节点,返回该节点的born属性值,并将其重命名为SomethingTotallyDifferent。",
+ "output": "MATCH (a {name: 'Carrie-Anne Moss'}) RETURN a.born AS SomethingTotallyDifferent",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "找到电影The Matrix节点以及与其有直接关系的所有节点,并返回这些节点。",
+ "output": "MATCH (matrix:movie {title: 'The Matrix'})-[*0..1]-(x) RETURN x",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中与姓名为\"PDC\"的组织p有关的计划有哪些?返回对应的组织,e,计划",
+ "output": "MATCH (p:organization{name:\"PDC\"})-[e]-(neighbor:plan) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "从图数据库中查找属性belt为white或为空的节点n,返回这些节点n的name、age和belt属性,同时按照节点的name属性排序。",
+ "output": "MATCH (n) WHERE n.belt = 'white' OR n.belt IS NULL RETURN n.name, n.age, n.belt ORDER BY n.name",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查找名称为Carrie-Anne Moss的节点,返回该节点的born属性值,并将其重命名为SomethingTotallyDifferent。",
+ "output": "MATCH (a {name: 'Carrie-Anne Moss'}) RETURN a.born AS SomethingTotallyDifferent",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "从数据库中查找所有通过organization_organization类型关系连接的节点对n和m,返回相应的节点和关系",
+ "output": "MATCH (n)-[e:organization_organization]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "查找图数据库中具有euid=0_3937_0_0_0关系,并返回与这些关系相关联的两个节点。",
+ "output": "MATCH (a)-[r]->(b) WHERE euid(r)=\"0_3937_0_0_0\" RETURN a,b",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得与标签为\"The extraordinary true story of Solomon Northup\"的电影p有关的用户有哪些?返回相应的节点和关系,跳过前9条数据",
+ "output": "MATCH (p:movie{id:\"1693\",title:\"12 Years a Slave\",tagline:\"The extraordinary true story of Solomon Northup\"})-[e]-(neighbor:user) RETURN p,e,neighbor SKIP 9",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中导演id为\"1\"的电影m的人有哪些?返回a节点,m节点",
+ "output": "MATCH (m:movie{id:\"1\"})<-[:directed]-(a:person) RETURN a,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得所有通过produce类型关系连接的节点对n和m,返回对应的n,出版,m",
+ "output": "MATCH (n)-[e:produce]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "common",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\n\n\n",
+ "input": "找到符合(a)-[r]->(b)关系,过滤条件是 euid(r)的值必须等于0_3937_0_0_0,返回所有符合条件的a,b节点。",
+ "output": "MATCH (a)-[r]->(b) WHERE euid(r)=\"0_3937_0_0_0\" RETURN a,b",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "查找图数据库中与名字为\"PIA\"的组织p有关的人有哪些?返回子图",
+ "output": "MATCH (p:organization{name:\"PIA\"})-[e]-(neighbor:person) RETURN DISTINCT p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "在图中查找与名为\"雾伞计划\"的计划p有关的人有哪些?返回对应的计划,e,人",
+ "output": "MATCH (p:plan{name:\"雾伞计划\"})-[e]-(neighbor:person) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中名为\"Pat Morita\"的人u写的电影有哪些?返回m节点的tagline属性值和连接关系。",
+ "output": "MATCH (u:person{name:\"Pat Morita\"})-[r:write]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查询与标题为\"The Monuments Men\"的电影p有关的人有哪些?返回子图",
+ "output": "MATCH (p:movie{id:\"1356\",title:\"The Monuments Men\"})-[e]-(neighbor:person) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "以可选的方式找到登录用户名为\"Jacob\"的用户u打分的电影有哪些?返回m节点的tagline属性值,r边",
+ "output": "OPTIONAL MATCH (u:user{login:\"Jacob\"})-[r:rate]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得所有通过is_friend类型关系连接的节点对n和m,返回子图",
+ "output": "MATCH (n)-[e:is_friend]-(m) RETURN n,e,m",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中带有person标签的节点的name属性值和带有movie标签的节点的title属性值,将这些结果合并在一起返回,并将合并后的列名统一为 name。",
+ "output": "MATCH (n:person) RETURN n.name AS name UNION ALL MATCH (n:movie) RETURN n.title AS name",
+ "history": []
+ },
+ {
+ "db_id": "finbench",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nfinbench包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "根据账户id4817170722885471951,找到1627020616747-1669690342640时间段内的所有转入和转出关系,返回转入和转出各自的总金额和最大金额,如果转入或转出边不存在,则返回-1作为最大金额值。",
+ "output": "MATCH (n:Account{id:%d}) WITH n OPTIONAL MATCH (n)-[e:transfer]->(m:Account) WHERE e.timestamp > 1627020616747 AND e.timestamp < 1669690342640 WITH n, sum(e.amount) as sumEdge1Amount, max(e.amount) as maxEdge1Amount, count(e) as numEdge1 OPTIONAL MATCH (n)<-[e:transfer]-(m:Account) WHERE e.timestamp > 1627020616747 AND e.timestamp < 1669690342640 WITH sumEdge1Amount, maxEdge1Amount, numEdge1, sum(e.amount) as sumEdge2Amount, max(e.amount) as maxEdge2Amount, count(e) as numEdge2 RETURN round(sumEdge1Amount * 1000) / 1000 as sumEdge1Amount, CASE WHEN maxEdge1Amount < 0 THEN -1 ELSE round(maxEdge1Amount * 1000) / 1000 END as maxEdge1Amount, numEdge1, round(sumEdge2Amount * 1000) / 1000 as sumEdge2Amount, CASE WHEN maxEdge2Amount < 0 THEN -1 ELSE round(maxEdge2Amount * 1000) / 1000 END as maxEdge2Amount, numEdge2",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "从数据库中查找与名字为\"motion capture\"的关键字p有关的电影有哪些?返回子图",
+ "output": "MATCH (p:keyword{id:\"2587\",name:\"motion capture\"})-[e]-(neighbor:movie) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中与用户id为\"240\"的节点p有关的用户有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:user{id:\"240\"})-[e]-(neighbor:user) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "在图中查找与名字为\"PIA\"的组织p有关的组织有哪些?返回相应的节点和关系",
+ "output": "MATCH (p:organization{name:\"PIA\"})-[e]-(neighbor:organization) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "获得与id为\"39\"的用户p有关的用户有哪些?返回子图",
+ "output": "MATCH (p:user{id:\"39\"})-[e]-(neighbor:user) RETURN p,e,neighbor",
+ "history": []
+ },
+ {
+ "db_id": "movie",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nmovie包含节点person、genre、keyword、movie、user和边acted_in、rate、directed、is_friend、has_genre、has_keyword、produce、write。节点person有属性id、name、born、poster_image。节点genre有属性id、name。节点keyword有属性id、name。节点movie有属性id、title、tagline、summary、poster_image、duration、rated。节点user有属性id、login。边acted_in有属性role。边rate有属性stars。\n\n",
+ "input": "查找图数据库中名为\"David Wenham\"的人u导演的电影有哪些?返回m节点的tagline属性值和连接关系。",
+ "output": "MATCH (u:person{name:\"David Wenham\"})-[r:directed]->(m:movie) RETURN m.tagline,r",
+ "history": []
+ },
+ {
+ "db_id": "the_three_body",
+ "instruction": "我希望你像一个Tugraph数据库前端一样工作,你只需要返回给我cypher语句。下面是一条描述任务的指令,写一条正确的response来完成这个请求.\n\"\n##Instruction:\nthe_three_body包含节点person、organization、plan、time和边person_person、person_plan、person_organization、organization_plan、organization_organization、time_time。节点person有属性name、introduce。节点organization有属性name、introduce。节点plan有属性name、content。节点time有属性name、time、important_events。边person_person有属性relation。边person_plan有属性relation。边person_organization有属性relation。边organization_plan有属性relation。边organization_organization有属性relation。边time_time有属性relation。\n\n",
+ "input": "获得的人n,返回n节点的introduce属性值,n节点的introduce属性值,同时按照节点的introduce属性排序",
+ "output": "MATCH (n:person) RETURN n.introduce,n.introduce ORDER BY n.introduce",
+ "history": []
+ }
+]
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/__init__.py
new file mode 100644
index 00000000..8affaf70
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/__init__.py
@@ -0,0 +1,4 @@
+"""
+dbgpt_hub_gql.data_process
+==========================
+"""
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/data_utils.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/data_utils.py
new file mode 100644
index 00000000..735c467d
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/data_process/data_utils.py
@@ -0,0 +1,1010 @@
+import hashlib
+import os
+from itertools import chain
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Dict,
+ Generator,
+ List,
+ Literal,
+ Optional,
+ Tuple,
+ Union,
+)
+
+import numpy as np
+import pandas as pd
+import tiktoken
+from datasets import (
+ Dataset,
+ DatasetDict,
+ concatenate_datasets,
+ interleave_datasets,
+ load_dataset,
+)
+
+from ..configs.config import EXT2TYPE, IGNORE_INDEX
+from ..configs.data_args import (
+ ALPACA_PROMPT_DICT,
+ DEFAULT_PROMPT_DICT,
+ SQL_PROMPT_DICT,
+ Llama2Template,
+ Template,
+)
+
+if TYPE_CHECKING:
+    from transformers import PreTrainedTokenizer
+
+    from ..configs.data_args import DataArguments
+
+from dbgpt_hub_gql.llm_base.loggings import get_logger
+
+logger = get_logger(__name__)
+
+
+def extract_default_prompt_dataset(example: Dict[str, Any]) -> Dict[str, str]:
+ # Not random, use pre-defined templates
+ if example.get("input", "") != "":
+ prompt_template = DEFAULT_PROMPT_DICT["prompt_input"]
+ else:
+ prompt_template = DEFAULT_PROMPT_DICT["prompt_no_input"]
+
+    # Format the prompt with the example's fields
+    formatted_prompt = prompt_template.format(**example)
+
+    return {"input": formatted_prompt}
+
+
+def extract_alpaca_prompt_dataset(example: Dict[str, Any]) -> Dict[str, str]:
+ if example.get("input", "") != "":
+ prompt_format = ALPACA_PROMPT_DICT["prompt_input"]
+ else:
+ prompt_format = ALPACA_PROMPT_DICT["prompt_no_input"]
+ return {"input": prompt_format.format(**example)}
+
+
+def extract_sql_prompt_dataset(example: Dict[str, Any]) -> Dict[str, str]:
+ if example.get("input", "") != "":
+ prompt_format = SQL_PROMPT_DICT["prompt_input"]
+ else:
+ prompt_format = SQL_PROMPT_DICT["prompt_no_input"]
+ return {"input": prompt_format.format(**example)}
+
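+# The three extract_*_prompt_dataset helpers differ only in the prompt
+# dictionary they draw from. As a sketch (hypothetical record), a row such as
+#   {"instruction": "Translate to GQL.", "input": "List all nodes.", "output": "MATCH (n) RETURN n"}
+# takes the "prompt_input" branch, while a row whose "input" is empty takes
+# the "prompt_no_input" branch; either way the rendered prompt is returned
+# under the "input" key.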
+
+def infer_max_len(
+ source_len: int, target_len: int, data_args: "DataArguments"
+) -> Tuple[int, int]:
+ max_target_len = int(
+ data_args.cutoff_len * (target_len / (source_len + target_len))
+ )
+ max_target_len = max(max_target_len, data_args.reserved_label_len)
+ max_source_len = data_args.cutoff_len - max_target_len
+ return max_source_len, max_target_len
+
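+# A worked example with hypothetical numbers: for cutoff_len=1024,
+# reserved_label_len=16, source_len=600 and target_len=200,
+# max_target_len = max(int(1024 * 200 / 800), 16) = 256 and
+# max_source_len = 1024 - 256 = 768, so both sides are truncated in
+# proportion to their lengths while the target keeps at least
+# reserved_label_len tokens.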
+
+def local_dataset(
+ dataset_path: str, eval_dataset_size: float = 0.1
+) -> DatasetDict:
+    """
+    Reads a dataset from a file and returns it as a DatasetDict, splitting off
+    a test subset when the file has no train split.
+
+    Args:
+        dataset_path (str): Path of the dataset file to read in. \
+            The format is inferred from the file extension.
+        eval_dataset_size (float): Fraction of the data reserved for the \
+            test split when the file does not already contain a train split.
+
+    Returns:
+        A DatasetDict with the loaded data; if the file has no "train" split, \
+        it is split here into "train" and "test" subsets.
+
+    Raises:
+        ValueError: If the specified file format is unsupported.
+
+    """
+
+ # Read in the full dataset from file based on the file format
+ if dataset_path.endswith(".json"):
+ full_dataset = load_dataset("json", data_files=dataset_path)
+ elif dataset_path.endswith(".jsonl"):
+ full_dataset = load_dataset("json", data_files=dataset_path)
+ elif dataset_path.endswith(".csv"):
+ full_dataset = Dataset.from_pandas(pd.read_csv(dataset_path))
+ elif dataset_path.endswith(".tsv"):
+ full_dataset = Dataset.from_pandas(pd.read_csv(dataset_path, delimiter="\t"))
+ else:
+ raise ValueError(f"Unsupported dataset format: {dataset_path}")
+ if "train" not in full_dataset:
+ split_dataset = full_dataset.train_test_split(test_size=eval_dataset_size)
+ return split_dataset
+ else:
+ return full_dataset
+
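+# Minimal usage sketch (assumes the bundled example dataset file; the path is
+# relative to the package root):
+#
+#   dataset = local_dataset("dbgpt_hub_gql/data/example_text2sql.json")
+#   print(dataset["train"][0])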
+
+def load_data(
+ dataset_path: str, eval_dataset_size: float = 0.1
+) -> Union[Dict[str, Dataset], None]:
+ """
+ Load a dataset based on its name.
+
+ Args:
+ dataset_path: A string representing the path to the dataset to be loaded.
+
+ Returns:
+ A dictionary containing the loaded dataset if the dataset exists.
+ None if the dataset does not exist.
+
+ Raises:
+ NotImplementedError: If the dataset name provided is not implemented yet or if
+ the dataset is not released.
+
+ Examples:
+ >>> load_data('alpaca')
+ {'train': Dataset(...), 'validation': Dataset(...), 'test': Dataset(...)}
+
+ """
+ if not os.path.exists(dataset_path):
+ # Download dataset from HuggingFace Datasets
+ print(
+ f"Lodding dataset from huggingface, please ref to https://huggingface.co/datasets/{dataset_path}"
+ )
+ dataset = load_dataset(dataset_path, cache_dir="~/.cache/huggingface/datasets")
+ return dataset
+ else:
+ # Load dataset from local file
+ try:
+ print(f"Lodding dataset from local path: {dataset_path}")
+ dataset = local_dataset(dataset_path, eval_dataset_size)
+ return dataset
+        except Exception as e:
+            raise ValueError(f"Error loading dataset from {dataset_path}") from e
+
+
+templates: Dict[str, Template] = {}
+
+
+def get_template_and_fix_tokenizer(
+ name: str, tokenizer: "PreTrainedTokenizer"
+) -> Template:
+    if tokenizer.eos_token_id is None:
+        tokenizer.eos_token = "<|endoftext|>"
+        logger.info("Add eos token: {}".format(tokenizer.eos_token))
+
+    if tokenizer.pad_token_id is None:
+        if tokenizer.unk_token_id is not None:
+            tokenizer.pad_token = tokenizer.unk_token
+        else:
+            tokenizer.pad_token = tokenizer.eos_token
+        logger.info("Add pad token: {}".format(tokenizer.pad_token))
+
+    # Check name before the template lookup; with the lookup first, the assert
+    # would fire on a None name and this early return would be unreachable.
+    if name is None:
+        return None
+
+    template = templates.get(name, None)
+    assert template is not None, "Template {} does not exist.".format(name)
+
+    additional_special_tokens = template.stop_words
+
+ tokenizer.add_special_tokens(
+ dict(additional_special_tokens=additional_special_tokens),
+ replace_additional_special_tokens=False,
+ )
+ return template
+
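+# Usage sketch (assumes a tokenizer loaded via transformers; the template name
+# must have been registered below):
+#
+#   from transformers import AutoTokenizer
+#
+#   tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
+#   template = get_template_and_fix_tokenizer("llama2", tokenizer)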
+
+def register_template(
+ name: str,
+ prefix: List[Union[str, Dict[str, str]]],
+ prompt: List[Union[str, Dict[str, str]]],
+ system: str,
+ sep: List[Union[str, Dict[str, str]]],
+ stop_words: Optional[List[str]] = [],
+ use_history: Optional[bool] = True,
+) -> None:
+ template_class = Llama2Template if "llama2" in name else Template
+ templates[name] = template_class(
+ prefix=prefix,
+ prompt=prompt,
+ system=system,
+ sep=sep,
+ stop_words=stop_words,
+ use_history=use_history,
+ )
+
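+# Example: registering a hypothetical custom template. The fields follow the
+# same placeholder conventions as the built-in templates below ({{system}}
+# for the system prompt, {{query}} for the user turn):
+#
+#   register_template(
+#       name="my_template",
+#       prefix=["{{system}}"],
+#       prompt=["### Question:\n{{query}}\n\n### Answer:\n"],
+#       system="",
+#       sep=["\n"],
+#   )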
+
+r"""
+Supports language model inference without histories.
+"""
+register_template(
+ name="vanilla",
+ prefix=[],
+ prompt=["{{query}}"],
+ system="",
+ sep=[],
+ use_history=False,
+)
+
+r"""
+Supports Mistral-style instruction prompting, e.g. for sqlcoder-7b.
+"""
+register_template(
+ name="mistral",
+ prefix=["{{system}}"],
+ prompt=["[INST] {{query}} [/INST]"],
+ system="",
+ sep=[],
+)
+
+
+r"""
+Default template.
+"""
+register_template(
+ name="default",
+ prefix=["{{system}}"],
+ prompt=["Human: {{query}}\nAssistant: "],
+ system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+ sep=["\n"],
+)
+
+
+r"""
+Supports: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
+ https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
+ https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
+"""
+register_template(
+ name="llama2",
+ prefix=["<>\n{{system}}\n<>\n\n"],
+ prompt=["[INST] {{query}} [/INST] "],
+ system=(
+ "You are a helpful, respectful and honest assistant. "
+ "Always answer as helpfully as possible, while being safe. "
+ "Your answers should not include any harmful, unethical, "
+ "racist, sexist, toxic, dangerous, or illegal content. "
+ "Please ensure that your responses are socially unbiased and positive in nature.\n"
+ "If a question does not make any sense, or is not factually coherent, "
+ "explain why instead of answering something not correct. "
+ "If you don't know the answer to a question, please don't share false information."
+ ),
+ sep=[],
+)
+
+
+r"""
+Supports: https://github.com/ymcui/Chinese-LLaMA-Alpaca-2
+ https://huggingface.co/ziqingyang/chinese-alpaca-2-7b
+"""
+register_template(
+ name="llama2_zh",
+ prefix=["<>\n{{system}}\n<>\n\n"],
+ prompt=["[INST] {{query}} [/INST] "],
+ system="You are a helpful assistant. 你是一个乐于助人的助手。",
+ sep=[],
+)
+
+
+r"""
+Supports: https://huggingface.co/tatsu-lab/alpaca-7b-wdiff
+ https://github.com/ymcui/Chinese-LLaMA-Alpaca
+"""
+register_template(
+ name="alpaca",
+ prefix=["{{system}}"],
+ prompt=["### Instruction:\n{{query}}\n\n### Response:\n"],
+ system=(
+ "Below is an instruction that describes a task. "
+ "Write a response that appropriately completes the request."
+ ),
+ sep=["\n\n"],
+)
+
+
+r"""
+Supports: https://huggingface.co/lmsys/vicuna-7b-delta-v1.1
+ https://huggingface.co/lmsys/vicuna-13b-delta-v1.1
+"""
+register_template(
+ name="vicuna",
+ prefix=["{{system}}"],
+ prompt=["USER: {{query}} ASSISTANT: "],
+ system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+ sep=[],
+)
+
+
+r"""
+Supports: https://huggingface.co/BelleGroup/BELLE-LLaMA-EXT-13B
+"""
+register_template(
+ name="belle",
+ prefix=["{{system}}"],
+ prompt=["Human: {{query}}\n\nBelle: "],
+ system="",
+ sep=["\n\n"],
+)
+
+
+r"""
+Supports: https://github.com/CVI-SZU/Linly
+"""
+register_template(
+ name="linly",
+ prefix=["{{system}}"],
+ prompt=["User: {{query}}\nBot: "],
+ system="",
+ sep=["\n"],
+)
+
+
+r"""
+Supports: https://github.com/Neutralzz/BiLLa
+"""
+register_template(
+ name="billa",
+ prefix=["{{system}}"],
+ prompt=["Human: {{query}}\nAssistant: "],
+ system="",
+ sep=["\n"],
+)
+
+
+r"""
+Supports: https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1
+"""
+register_template(
+ name="ziya",
+ prefix=["{{system}}"],
+ prompt=[{"token": ""}, ":{{query}}\n", {"token": ""}, ":"],
+ system="",
+ sep=["\n"],
+)
+
+
+r"""
+Supports: https://huggingface.co/qhduan/aquilachat-7b
+"""
+register_template(
+ name="aquila",
+ prefix=["{{system}}"],
+ prompt=["Human: {{query}}###Assistant: "],
+ system=(
+ "A chat between a curious human and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the human's questions."
+ ),
+ sep=["###"],
+)
+
+
+r"""
+Supports: https://huggingface.co/internlm/internlm-chat-7b
+"""
+register_template(
+ name="intern",
+ prefix=["{{system}}"],
+ prompt=["<|User|>:{{query}}", {"token": ""}, "\n<|Bot|>:"],
+ system="",
+ sep=["\n"],
+ stop_words=["", ""], # internlm cannot replace eos token
+)
+
+
+r"""
+Supports: https://huggingface.co/baichuan-inc/Baichuan-13B-Chat
+Used for training and inference of the fine-tuned models.
+"""
+register_template(
+ name="baichuan",
+ prefix=["{{system}}"],
+ prompt=[
+ {"token": ""}, # user token
+ "{{query}}",
+ {"token": ""}, # assistant token
+ ],
+ system="",
+ sep=[],
+ stop_words=[],
+)
+
+
+r"""
+Supports: https://huggingface.co/baichuan-inc/Baichuan-13B-Chat
+Used for inference of the original model.
+"""
+register_template(
+ name="baichuan_eval",
+ prefix=["{{system}}", {"token": ""}], # user token
+ prompt=["{{query}}", {"token": ""}], # assistant token
+ system="",
+ sep=[],
+ stop_words=[""], # user token
+)
+
+r"""
+Supports: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
+ https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
+Used for training and inference of the fine-tuned models.
+"""
+register_template(
+ name="baichuan2",
+ prefix=["{{system}}"],
+ prompt=[
+ {"token": ""}, # user token
+ "{{query}}",
+ {"token": ""}, # assistant token
+ ],
+ system="",
+ sep=[],
+)
+
+
+r"""
+Supports: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
+ https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
+Used for inference of the original model.
+"""
+register_template(
+ name="baichuan2_eval",
+ prefix=["{{system}}", {"token": ""}], # user token
+ prompt=["{{query}}", {"token": ""}], # assistant token
+ system="",
+ sep=[],
+ stop_words=[""], # user token
+)
+
+
+r"""
+Supports: https://huggingface.co/HuggingFaceH4/starchat-alpha
+ https://huggingface.co/HuggingFaceH4/starchat-beta
+
+"""
+register_template(
+ name="starchat",
+ prefix=[{"token": "<|system|>"}, "\n{{system}}", {"token": "<|end|>"}],
+ prompt=[
+ {"token": "<|user|>"},
+ "\n{{query}}",
+ {"token": "<|end|>"},
+ "\n",
+ {"token": "<|assistant|>"},
+ ],
+ system="",
+ sep=["\n"],
+ stop_words=["<|end|>"],
+)
+
+
+r"""
+Supports: https://huggingface.co/Qwen/Qwen-7B-Chat
+"""
+register_template(
+ name="chatml",
+ prefix=[{"token": "<|im_start|>"}, "system\n{{system}}", {"token": "<|im_end|>"}],
+ prompt=[
+ {"token": "<|im_start|>"},
+ "user\n{{query}}",
+ {"token": "<|im_end|>"},
+ "\n",
+ {"token": "<|im_start|>"},
+ "assistant\n",
+ ],
+ system="You are a helpful assistant.",
+ sep=["\n"],
+ stop_words=["<|im_end|>"],
+)
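+# For reference, the pieces above render roughly as:
+#   <|im_start|>system\nYou are a helpful assistant.<|im_end|>\n
+#   <|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n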
+
+
+r"""
+Supports: https://huggingface.co/THUDM/chatglm2-6b
+"""
+register_template(
+ name="chatglm2",
+ prefix=[{"token": "[gMASK]"}, {"token": "sop"}, "{{system}}"],
+ prompt=["[Round {{idx}}]\n\n问:{{query}}\n\n答:"],
+ system="",
+ sep=["\n\n"],
+)
+
+
+r"""
+Supports: https://huggingface.co/THUDM/chatglm3-6b
+"""
+register_template(
+ name="chatglm3",
+ prefix=[
+ {"token": "[gMASK]"},
+ {"token": "sop"},
+ {"token": "<|system|>"},
+ "\n",
+ "{{system}}",
+ ],
+ prompt=[
+ {"token": "<|user|>"},
+ "\n",
+ "{{query}}",
+ {"token": "<|assistant|>"},
+ "\n", # add an extra newline to avoid error in ChatGLM's process_response method
+ ],
+ system=(
+ "You are ChatGLM3, a large language model trained by Zhipu.AI. "
+ "Follow the user's instructions carefully. Respond using markdown."
+ ),
+ sep=[],
+ stop_words=["<|user|>", "<|observation|>"],
+)
+
+register_template(
+ name="chatglm3_raw", # the raw template for tool tuning
+ prefix=[
+ {"token": "[gMASK]"},
+ {"token": "sop"},
+ {"token": "<|system|>"},
+ "\n",
+ "{{system}}",
+ ],
+ prompt=[{"token": "<|user|>"}, "\n", "{{query}}", {"token": "<|assistant|>"}],
+ system=(
+ "You are ChatGLM3, a large language model trained by Zhipu.AI. "
+ "Follow the user's instructions carefully. Respond using markdown."
+ ),
+ sep=[],
+ stop_words=["<|user|>", "<|observation|>"],
+)
+
+
+r"""
+Supports: https://huggingface.co/xverse/XVERSE-13B-Chat
+"""
+register_template(
+ name="xverse",
+ prefix=["{{system}}"],
+ prompt=["Human: {{query}}\n\nAssistant: "],
+ system="",
+ sep=[],
+)
+
+
+def split_dataset(
+ dataset: Union["Dataset", "IterableDataset"],
+ data_args: "DataArguments",
+ training_args: "TrainingArguments",
+) -> Dict[str, "Dataset"]:
+ if training_args.do_train:
+ if data_args.val_size > 1e-6: # Split the dataset
+ if data_args.streaming:
+ val_set = dataset.take(int(data_args.val_size))
+ train_set = dataset.skip(int(data_args.val_size))
+ dataset = dataset.shuffle(
+ buffer_size=data_args.buffer_size, seed=training_args.seed
+ )
+ return {"train_dataset": train_set, "eval_dataset": val_set}
+ else:
+ val_size = (
+ int(data_args.val_size)
+ if data_args.val_size > 1
+ else data_args.val_size
+ )
+ dataset = dataset.train_test_split(
+ test_size=val_size, seed=training_args.seed
+ )
+ return {
+ "train_dataset": dataset["train"],
+ "eval_dataset": dataset["test"],
+ }
+ else:
+ if data_args.streaming:
+ dataset = dataset.shuffle(
+ buffer_size=data_args.buffer_size, seed=training_args.seed
+ )
+ return {"train_dataset": dataset}
+ else: # do_eval or do_predict
+ return {"eval_dataset": dataset}
+
+
+def preprocess_dataset(
+ dataset: Union["Dataset", "IterableDataset"],
+ tokenizer: "PreTrainedTokenizer",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ stage: Literal["pt", "sft", "rm", "ppo"],
+) -> Union["Dataset", "IterableDataset"]:
+ column_names = list(next(iter(dataset)).keys())
+ template = get_template_and_fix_tokenizer(data_args.template, tokenizer)
+
+ def construct_example(examples: Dict[str, List[Any]]) -> Generator[Any, None, None]:
+ for i in range(len(examples["prompt"])):
+ query, response = examples["prompt"][i], examples["response"][i]
+ query = (
+ query + "\n" + examples["query"][i]
+ if "query" in examples and examples["query"][i]
+ else query
+ )
+ history = examples["history"][i] if "history" in examples else None
+ system = examples["system"][i] if "system" in examples else None
+ yield query, response, history, system
+
+ def preprocess_pretrain_dataset(examples: Dict[str, List[Any]]) -> Dict[str, Any]:
+        # build grouped texts with format `X1 X2 X3 ...` (without <eos>)
+ if isinstance(
+ getattr(tokenizer, "tokenizer", None), tiktoken.Encoding
+ ): # for tiktoken tokenizer (Qwen)
+ kwargs = dict(allowed_special="all")
+ else:
+ kwargs = dict(add_special_tokens=False)
+
+ tokenized_examples = tokenizer(examples["prompt"], **kwargs)
+ concatenated_examples = {
+ k: list(chain(*tokenized_examples[k])) for k in tokenized_examples.keys()
+ }
+ total_length = len(concatenated_examples[list(concatenated_examples.keys())[0]])
+ block_size = data_args.max_source_length
+ # we drop the small remainder, and if the total_length < block_size, we exclude this batch
+ total_length = (total_length // block_size) * block_size
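+        # e.g. with block_size=512, 1300 concatenated tokens yield two
+        # 512-token blocks and the trailing 276 tokens are dropped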
+ # split by chunks of max_source_length
+ result = {
+ k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
+ for k, t in concatenated_examples.items()
+ }
+ return result
+
+ def preprocess_supervised_dataset(examples: Dict[str, List[Any]]) -> Dict[str, Any]:
+        # build inputs with format `<bos> X Y <eos>` and labels with format `<ignore> ... <ignore> Y <eos>`
+        # for multiturn examples, we only mask the prompt part in each prompt-response pair.
+ model_inputs = {"input_ids": [], "attention_mask": [], "labels": []}
+ max_length = data_args.max_source_length + data_args.max_target_length
+
+ for query, response, history, system in construct_example(examples):
+ input_ids, labels = [], []
+
+ for source_ids, target_ids in template.encode_multiturn(
+ tokenizer, query, response, history, system
+ ):
+ if len(source_ids) > data_args.max_source_length:
+ source_ids = source_ids[: data_args.max_source_length]
+ if len(target_ids) > data_args.max_target_length:
+ target_ids = target_ids[: data_args.max_target_length]
+
+ if len(input_ids) + len(source_ids) + len(target_ids) > max_length:
+ break
+
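+                # prompt tokens are masked with IGNORE_INDEX so the loss is
+                # computed only on the response tokens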
+ input_ids += source_ids + target_ids
+ labels += [IGNORE_INDEX] * len(source_ids) + target_ids
+
+ model_inputs["input_ids"].append(input_ids)
+ model_inputs["attention_mask"].append([1] * len(input_ids))
+ model_inputs["labels"].append(labels)
+
+ return model_inputs
+
+ def preprocess_unsupervised_dataset(
+ examples: Dict[str, List[Any]]
+ ) -> Dict[str, Any]:
+        # build inputs with format `<bos> X` and labels with format `Y <eos>`
+ model_inputs = {"input_ids": [], "attention_mask": [], "labels": []}
+
+ for query, response, history, system in construct_example(examples):
+ source_ids, target_ids = template.encode_oneturn(
+ tokenizer, query, response, history, system
+ )
+
+ if len(source_ids) > data_args.max_source_length:
+ source_ids = source_ids[: data_args.max_source_length]
+ if len(target_ids) > data_args.max_target_length:
+ target_ids = target_ids[: data_args.max_target_length]
+
+ model_inputs["input_ids"].append(source_ids)
+ model_inputs["attention_mask"].append([1] * len(source_ids))
+ model_inputs["labels"].append(target_ids)
+
+ return model_inputs
+
+ def preprocess_pairwise_dataset(
+ examples: Dict[str, List[Any]]
+ ) -> Dict[str, List[List[int]]]:
+        # build input pairs with format `<bos> X`, `Y1 <eos>` and `Y2 <eos>` for rm stage
+ model_inputs = {"prompt_ids": [], "chosen_ids": [], "rejected_ids": []}
+ for query, response, history, system in construct_example(examples):
+ if not (
+ isinstance(query, str)
+ and isinstance(response, list)
+ and query != ""
+ and len(response) > 1
+ ):
+ continue
+
+ prompt_ids, chosen_ids = template.encode_oneturn(
+ tokenizer, query, response[0], history, system
+ )
+ _, rejected_ids = template.encode_oneturn(
+ tokenizer, query, response[1], history, system
+ )
+
+ # if template.efficient_eos:
+ chosen_ids += [tokenizer.eos_token_id]
+ rejected_ids += [tokenizer.eos_token_id]
+
+ source_len, target_len = len(prompt_ids), max(
+ len(chosen_ids), len(rejected_ids)
+ )
+ max_source_len, max_target_len = infer_max_len(
+ source_len, target_len, data_args
+ )
+ if source_len > max_source_len:
+ prompt_ids = prompt_ids[:max_source_len]
+ if target_len > max_target_len:
+ chosen_ids = chosen_ids[:max_target_len]
+ rejected_ids = rejected_ids[:max_target_len]
+
+ model_inputs["prompt_ids"].append(prompt_ids)
+ model_inputs["chosen_ids"].append(chosen_ids)
+ model_inputs["rejected_ids"].append(rejected_ids)
+
+ return model_inputs
+
+ def print_pairwise_dataset_example(example: Dict[str, List[int]]) -> None:
+ print("prompt_ids:\n{}".format(example["prompt_ids"]))
+ print(
+ "prompt:\n{}".format(
+ tokenizer.decode(example["prompt_ids"], skip_special_tokens=False)
+ )
+ )
+ print("chosen_ids:\n{}".format(example["chosen_ids"]))
+ print(
+ "chosen:\n{}".format(
+ tokenizer.decode(example["chosen_ids"], skip_special_tokens=False)
+ )
+ )
+ print("rejected_ids:\n{}".format(example["rejected_ids"]))
+ print(
+ "rejected:\n{}".format(
+ tokenizer.decode(example["rejected_ids"], skip_special_tokens=False)
+ )
+ )
+
+ def print_supervised_dataset_example(example):
+ print("input_ids:\n{}".format(example["input_ids"]))
+ print(
+ "inputs:\n{}".format(
+ tokenizer.decode(example["input_ids"], skip_special_tokens=False)
+ )
+ )
+ print("label_ids:\n{}".format(example["labels"]))
+ print(
+ "labels:\n{}".format(
+ tokenizer.decode(
+ [
+ token_id if token_id != IGNORE_INDEX else tokenizer.pad_token_id
+ for token_id in example["labels"]
+ ],
+ skip_special_tokens=False,
+ )
+ )
+ )
+
+ if stage == "pt":
+ pass
+ elif stage == "sft" and not training_args.predict_with_generate:
+ preprocess_function = preprocess_supervised_dataset
+ print_function = print_supervised_dataset_example
+ elif stage == "rm":
+ print(111111111111111111)
+ preprocess_function = preprocess_pairwise_dataset
+ print_function = print_pairwise_dataset_example
+ else:
+ pass
+
+ with training_args.main_process_first(desc="dataset map pre-processing"):
+ kwargs = {}
+ if not data_args.streaming:
+ kwargs = dict(
+ num_proc=data_args.preprocessing_num_workers,
+ load_from_cache_file=not data_args.overwrite_cache,
+ desc="Running tokenizer on dataset",
+ )
+
+ dataset = dataset.map(
+ preprocess_function, batched=True, remove_columns=column_names, **kwargs
+ )
+
+ print_function(next(iter(dataset)))
+ return dataset
+
+
+## used in get_dataset
+def checksum(data_files: List[str], file_sha1: Optional[str] = None) -> None:
+ if file_sha1 is None:
+ logger.warning(
+ "Checksum failed: missing SHA-1 hash value in dataset_info.json."
+ )
+ return
+
+ if len(data_files) != 1:
+ logger.warning("Checksum failed: too many files.")
+ return
+
+ with open(data_files[0], "rb") as f:
+ sha1 = hashlib.sha1(f.read()).hexdigest()
+ if sha1 != file_sha1:
+ logger.warning(
+ "Checksum failed: mismatched SHA-1 hash value at {}.".format(
+ data_files[0]
+ )
+ )
+
+
+def get_dataset(
+ model_args: "ModelArguments", data_args: "DataArguments"
+) -> Union["Dataset", "IterableDataset"]:
+ max_samples = data_args.max_samples
+ all_datasets: List[
+ Union["Dataset", "IterableDataset"]
+ ] = [] # support multiple datasets
+
+ for dataset_attr in data_args.dataset_list:
+ logger.info("Loading dataset {}...".format(dataset_attr))
+
+ if dataset_attr.load_from == "hf_hub":
+ data_path = dataset_attr.dataset_name
+ data_files = None
+ elif dataset_attr.load_from == "script":
+ data_path = os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ data_files = None
+ elif dataset_attr.load_from == "file":
+ data_path = None
+ data_files: List[str] = []
+
+ if os.path.isdir(
+ os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ ): # directory
+ for file_name in os.listdir(
+ os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ ):
+ data_files.append(
+ os.path.join(
+ data_args.dataset_dir, dataset_attr.dataset_name, file_name
+ )
+ )
+ if data_path is None:
+ data_path = EXT2TYPE.get(file_name.split(".")[-1], None)
+ else:
+ assert data_path == EXT2TYPE.get(
+ file_name.split(".")[-1], None
+ ), "file type does not match."
+ elif os.path.isfile(
+ os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ ): # single file
+ data_files.append(
+ os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ )
+ data_path = EXT2TYPE.get(dataset_attr.dataset_name.split(".")[-1], None)
+ else:
+ raise ValueError("File not found.")
+
+ assert data_path, "File extension must be txt, csv, json or jsonl."
+ checksum(data_files, dataset_attr.dataset_sha1)
+ else:
+ raise NotImplementedError
+
+ dataset = load_dataset(
+ data_path,
+ data_files=data_files,
+ split=data_args.split,
+ cache_dir=model_args.cache_dir,
+ streaming=data_args.streaming,
+ use_auth_token=True if model_args.use_auth_token else None,
+ )
+
+ if max_samples is not None:
+ max_samples_temp = min(len(dataset), max_samples)
+ dataset = dataset.select(range(max_samples_temp))
+
+ for column_name in ["prompt", "query", "response", "history"]: # align datasets
+ if (
+ getattr(dataset_attr, column_name)
+ and getattr(dataset_attr, column_name) != column_name
+ ):
+ dataset = dataset.rename_column(
+ getattr(dataset_attr, column_name), column_name
+ )
+
+ if dataset_attr.system_prompt: # add system prompt
+ if data_args.streaming:
+ dataset = dataset.map(lambda _: {"system": dataset_attr.system_prompt})
+ else:
+ dataset = dataset.add_column(
+ "system", [dataset_attr.system_prompt] * len(dataset)
+ )
+
+ all_datasets.append(dataset)
+
+ if len(data_args.dataset_list) == 1:
+ return all_datasets[0]
+ elif data_args.mix_strategy == "concat":
+ if data_args.streaming:
+ logger.warning(
+ "The samples between different datasets will not be mixed in streaming mode."
+ )
+ return concatenate_datasets(all_datasets)
+ elif data_args.mix_strategy.startswith("interleave"):
+ if not data_args.streaming:
+ logger.warning(
+ "We recommend using `mix_strategy=concat` in non-streaming mode."
+ )
+ stopping_strategy = (
+ "first_exhausted"
+ if data_args.mix_strategy.endswith("under")
+ else "all_exhausted"
+ )
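+        # "interleave_under" stops once the smallest dataset is exhausted
+        # ("first_exhausted"); other interleave strategies keep sampling
+        # until every dataset is exhausted ("all_exhausted")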
+ return interleave_datasets(
+ all_datasets,
+ data_args.interleave_probs,
+ stopping_strategy=stopping_strategy,
+ )
+ else:
+ raise ValueError("Unknown mixing strategy.")
+
+
+def split_train_eval(
+ dataset: Dataset,
+ do_eval: bool = False,
+ eval_dataset_size: float = 0.1,
+ max_eval_samples: int = None,
+ do_train: bool = True,
+ max_train_samples: int = None,
+) -> "Tuple[Dataset, Dataset]":
+ """
+ Prepare the training and evaluation datasets for a machine learning model.
+
+ Args:
+ dataset (DatasetDict): The complete dataset containing train, validation, and test splits.
+ do_eval (bool, optional): Whether to use an evaluation dataset or not. Defaults to False.
+ eval_dataset_size (float, optional): The size of the validation set if splitting from the training data.
+ Ignored if `do_eval` is False. Defaults to 0.2.
+ max_eval_samples (int, optional): The maximum number of samples to keep in the evaluation dataset.
+ Ignored if `do_eval` is False or `None`. Defaults to None.
+ do_train (bool, optional): Whether to use a training dataset or not. Defaults to True.
+ max_train_samples (int, optional): The maximum number of samples to keep in the training dataset.
+ Ignored if `do_train` is False or `None`. Defaults to None.
+
+ Returns:
+ Dict[str, Dataset]: A dictionary containing the prepared training and evaluation datasets
+ (if used), where the keys are 'train' and 'eval', respectively.
+ """
+ if not isinstance(dataset, DatasetDict):
+ raise TypeError("The 'dataset' argument must be a DatasetDict object.")
+
+ train_dataset, eval_dataset = None, None
+ # Prepare evaluation dataset
+ if do_eval:
+ if "eval" in dataset:
+ eval_dataset = dataset["eval"]
+ else:
+ # Split train dataset in train and validation according to `eval_dataset_size`
+ print(
+ f"Splitting the dataset into train and validation according to `eval_dataset_size`: {eval_dataset_size}"
+ )
+ dataset = dataset["train"].train_test_split(
+ test_size=eval_dataset_size, shuffle=True, seed=42
+ )
+ eval_dataset = dataset["test"]
+
+        # Reduce evaluation dataset size (if specified)
+        if max_eval_samples is not None and len(eval_dataset) > max_eval_samples:
+            print(
+                f"You have set max_eval_samples: {max_eval_samples}, will do sampling ..."
+            )
+            eval_dataset = eval_dataset.select(np.arange(max_eval_samples))
+
+ # Prepare training dataset
+ if do_train:
+ train_dataset = dataset["train"]
+
+        # Reduce training dataset size (if specified)
+        if max_train_samples is not None and len(train_dataset) > max_train_samples:
+            print(
+                f"You have set max_train_samples: {max_train_samples}, will do sampling ..."
+            )
+            train_dataset = train_dataset.select(np.arange(max_train_samples))
+
+ return train_dataset, eval_dataset
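+
+
+# Usage sketch (illustrative):
+#   train_ds, eval_ds = split_train_eval(
+#       dataset, do_eval=True, eval_dataset_size=0.1, do_train=True
+#   )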
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluation.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluation.py
new file mode 100644
index 00000000..8b172b6a
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluation.py
@@ -0,0 +1,101 @@
+import os
+import sys
+import argparse
+import importlib
+import json
+import prettytable as pt
+from evaluator.evaluator import Evaluator
+from evaluator.similarity_evaluator import SimilarityEvaluator
+
+
+def evaluate(gold, predict, etype, impl):
+ log_file = open(f"{os.path.dirname(__file__)}/../output/logs/eval.log", "w")
+ log_lines = []
+
+ with open(gold) as f:
+ gseq_one = []
+ for l in f.readlines():
+ if len(l.strip()) == 0:
+                # use a placeholder for empty lines so evaluation can continue
+ gseq_one.append("no out")
+ else:
+ gseq_one.append(l.strip())
+
+    with open(predict) as f:
+        pseq_one = []
+        for l in f.readlines():
+            if len(l.strip()) == 0:
+                # use a placeholder for empty lines so evaluation can continue
+                pseq_one.append("no out")
+            else:
+                pseq_one.append(l.strip())
+
+    assert len(gseq_one) == len(
+        pseq_one
+    ), "the numbers of predicted queries and gold standard queries must be equal"
+
+ score_total = 0
+ if etype == "similarity":
+ evaluator = SimilarityEvaluator()
+ elif etype == "grammar":
+ model_path = f"evaluator.impl.{impl}.grammar_evaluator"
+ m = importlib.import_module(model_path)
+ GrammarEvaluator = getattr(m, "GrammarEvaluator")
+ evaluator = GrammarEvaluator()
+
+ total = 0
+ for i in range(len(gseq_one)):
+ score = evaluator.evaluate(pseq_one[i], gseq_one[i])
+ if score != -1:
+ score_total += score
+ total += 1
+ tmp_log = {}
+ tmp_log["pred"] = pseq_one[i]
+ tmp_log["gold"] = gseq_one[i]
+ tmp_log["score"] = score
+ log_lines.append(tmp_log)
+
+    json.dump(log_lines, log_file, ensure_ascii=False, indent=4)
+    log_file.close()
+
+    tb = pt.PrettyTable()
+    tb.field_names = ["Evaluation Type", "Total Count", "Accuracy"]
+    accuracy = score_total / total if total > 0 else 0.0
+    tb.add_row([etype, len(gseq_one), "{:.3f}".format(accuracy)])
+ print(tb)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--input",
+ dest="input",
+ type=str,
+ help="the path to the input file",
+ required=True,
+ )
+ parser.add_argument(
+ "--gold", dest="gold", type=str, help="the path to the gold queries", default=""
+ )
+ parser.add_argument(
+ "--etype",
+ dest="etype",
+ type=str,
+ default="similarity",
+ help="evaluation type, exec for test suite accuracy, match for the original exact set match accuracy",
+ choices=("similarity", "grammar"),
+ )
+ parser.add_argument(
+ "--impl",
+ dest="impl",
+ type=str,
+ default="tugraph-analytics",
+ help="implementation folder for grammar evaluator",
+ )
+ args = parser.parse_args()
+
+ # Print args
+ print(f"params as fllows \n {args}")
+
+    # Evaluate the predicted GQL queries
+ evaluate(args.gold, args.input, args.etype, args.impl)
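+
+    # Example invocation (file names are illustrative):
+    #   python evaluation.py --input pred_gql.txt --gold gold_gql.txt \
+    #       --etype grammar --impl tugraph-analytics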
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/__init__.py
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/__init__.py
@@ -0,0 +1 @@
+
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/evaluator.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/evaluator.py
new file mode 100644
index 00000000..8b97babf
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/evaluator.py
@@ -0,0 +1,3 @@
+class Evaluator:
+    """Base evaluator; subclasses override evaluate() to score one prediction."""
+
+    def evaluate(self, query_predict, query_gold):
+        return 1
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/__init__.py
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/__init__.py
@@ -0,0 +1 @@
+
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/geaflow-dsl-parser-0.5.0-jar-with-dependencies.jar b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/geaflow-dsl-parser-0.5.0-jar-with-dependencies.jar
new file mode 100644
index 00000000..caefe106
Binary files /dev/null and b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/geaflow-dsl-parser-0.5.0-jar-with-dependencies.jar differ
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/grammar_evaluator.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/grammar_evaluator.py
new file mode 100644
index 00000000..9d474c1a
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-analytics/grammar_evaluator.py
@@ -0,0 +1,28 @@
+import jpype
+import os.path
+
+
+class GrammarEvaluator:
+ def __init__(self):
+ jvmPath = jpype.getDefaultJVMPath()
+
+        # GQL grammar parser from tugraph-analytics: https://github.com/TuGraph-family/tugraph-analytics/tree/master/geaflow/geaflow-dsl/geaflow-dsl-parser/src/main/java/com/antgroup/geaflow/dsl/parser
+ jarpath = (
+ os.path.dirname(__file__)
+ + "/geaflow-dsl-parser-0.5.0-jar-with-dependencies.jar"
+ )
+ jvm_cp = f"-Djava.class.path={jarpath}"
+ jpype.startJVM(jvmPath, "-ea", classpath=[jarpath], convertStrings=False)
+ JDClass = jpype.JClass("com.antgroup.geaflow.dsl.parser.GeaFlowDSLParser")
+ self.jd = JDClass()
+
+    def evaluate(self, query_predict, query_gold):
+        try:
+            self.jd.parseStatement(query_gold)
+            try:
+                self.jd.parseStatement(query_predict)
+                return 1  # prediction parses successfully
+            except jpype.JException:
+                return 0  # prediction fails to parse
+        except jpype.JException:
+            return -1  # gold query itself is invalid; excluded from scoring
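+
+
+# Note: jpype can start at most one JVM per process, so GrammarEvaluator
+# should be instantiated once and reused for all queries in a run.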
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.g4 b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.g4
new file mode 100644
index 00000000..0fee20da
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.g4
@@ -0,0 +1,695 @@
+/*
+ * Copyright (c) 2015-2019 "Neo Technology,"
+ * Network Engine for Objects in Lund AB [http://neotechnology.com]
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * Attribution Notice under the terms of the Apache License 2.0
+ *
+ * This work was created by the collective efforts of the openCypher community.
+ * Without limiting the terms of Section 6, any Derivative Work that is not
+ * approved by the public consensus process of the openCypher Implementers Group
+ * should not be described as “Cypher” (and Cypher® is a registered trademark of
+ * Neo4j Inc.) or as "openCypher". Extensions by implementers or prototypes or
+ * proposals for change that have been documented or implemented should only be
+ * described as "implementation extensions to Cypher" or as "proposed changes to
+ * Cypher that are not yet approved by the openCypher community".
+ */
+/*
+ * @description Cypher for TuGraph (derived from Cypher.g4 in openCypher@b724b18)
+ * @created WangTao
+ * @date 2019.08.27
+ */
+grammar Lcypher;
+
+oC_Cypher : SP? oC_Statement ( SP? ';' )? SP? EOF ;
+
+oC_Statement : oC_Query
+ | EXPLAIN SP? oC_Query
+ | PROFILE SP? oC_Query
+ ;
+
+EXPLAIN : ( 'E' | 'e' ) ( 'X' | 'x' ) ( 'P' | 'p' ) ( 'L' | 'l' ) ( 'A' | 'a' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ;
+
+PROFILE : ( 'P' | 'p' ) ( 'R' | 'r' ) ( 'O' | 'o' ) ( 'F' | 'f' ) ( 'I' | 'i' ) ( 'L' | 'l' ) ( 'E' | 'e' ) ;
+
+oC_Query : oC_RegularQuery
+ | oC_StandaloneCall
+ ;
+
+oC_RegularQuery : oC_SingleQuery ( SP? oC_Union )* ;
+
+oC_Union : ( UNION SP ALL SP? oC_SingleQuery )
+ | ( UNION SP? oC_SingleQuery )
+ ;
+
+UNION : ( 'U' | 'u' ) ( 'N' | 'n' ) ( 'I' | 'i' ) ( 'O' | 'o' ) ( 'N' | 'n' ) ;
+
+ALL : ( 'A' | 'a' ) ( 'L' | 'l' ) ( 'L' | 'l' ) ;
+
+oC_SingleQuery : oC_SinglePartQuery
+ | oC_MultiPartQuery
+ ;
+
+oC_SinglePartQuery : ( ( oC_ReadingClause SP? )* oC_Return )
+ | ( ( oC_ReadingClause SP? )* oC_UpdatingClause ( SP? oC_UpdatingClause )* ( SP? oC_Return )? )
+ ;
+
+oC_MultiPartQuery : ( ( oC_ReadingClause SP? )* ( oC_UpdatingClause SP? )* oC_With SP? )+ oC_SinglePartQuery ;
+
+oC_UpdatingClause : oC_Create
+ | oC_Merge
+ | oC_Delete
+ | oC_Set
+ | oC_Remove
+ ;
+
+oC_ReadingClause : oC_Match
+ | oC_Unwind
+ | oC_InQueryCall
+ ;
+
+oC_Match : ( OPTIONAL_ SP )? MATCH SP? oC_Pattern ( SP? oC_Hint )* ( SP? oC_Where )? ;
+
+OPTIONAL_ : ( 'O' | 'o' ) ( 'P' | 'p' ) ( 'T' | 't' ) ( 'I' | 'i' ) ( 'O' | 'o' ) ( 'N' | 'n' ) ( 'A' | 'a' ) ( 'L' | 'l' ) ;
+
+MATCH : ( 'M' | 'm' ) ( 'A' | 'a' ) ( 'T' | 't' ) ( 'C' | 'c' ) ( 'H' | 'h' ) ;
+
+oC_Unwind : UNWIND SP? oC_Expression SP AS SP oC_Variable ;
+
+UNWIND : ( 'U' | 'u' ) ( 'N' | 'n' ) ( 'W' | 'w' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ;
+
+AS : ( 'A' | 'a' ) ( 'S' | 's' ) ;
+
+oC_Merge : MERGE SP? oC_PatternPart ( SP oC_MergeAction )* ;
+
+MERGE : ( 'M' | 'm' ) ( 'E' | 'e' ) ( 'R' | 'r' ) ( 'G' | 'g' ) ( 'E' | 'e' ) ;
+
+oC_MergeAction : ( ON SP MATCH SP oC_Set )
+ | ( ON SP CREATE SP oC_Set )
+ ;
+
+ON : ( 'O' | 'o' ) ( 'N' | 'n' ) ;
+
+CREATE : ( 'C' | 'c' ) ( 'R' | 'r' ) ( 'E' | 'e' ) ( 'A' | 'a' ) ( 'T' | 't' ) ( 'E' | 'e' ) ;
+
+oC_Create : CREATE SP? oC_Pattern ;
+
+oC_Set : SET SP? oC_SetItem ( SP? ',' SP? oC_SetItem )* ;
+
+SET : ( 'S' | 's' ) ( 'E' | 'e' ) ( 'T' | 't' ) ;
+
+oC_SetItem : ( oC_PropertyExpression SP? '=' SP? oC_Expression )
+ | ( oC_Variable SP? '=' SP? oC_Expression )
+ | ( oC_Variable SP? '+=' SP? oC_Expression )
+ | ( oC_Variable SP? oC_NodeLabels )
+ ;
+
+oC_Delete : ( DETACH SP )? DELETE_ SP? oC_Expression ( SP? ',' SP? oC_Expression )* ;
+
+DETACH : ( 'D' | 'd' ) ( 'E' | 'e' ) ( 'T' | 't' ) ( 'A' | 'a' ) ( 'C' | 'c' ) ( 'H' | 'h' ) ;
+
+DELETE_ : ( 'D' | 'd' ) ( 'E' | 'e' ) ( 'L' | 'l' ) ( 'E' | 'e' ) ( 'T' | 't' ) ( 'E' | 'e' ) ;
+
+oC_Remove : REMOVE SP oC_RemoveItem ( SP? ',' SP? oC_RemoveItem )* ;
+
+REMOVE : ( 'R' | 'r' ) ( 'E' | 'e' ) ( 'M' | 'm' ) ( 'O' | 'o' ) ( 'V' | 'v' ) ( 'E' | 'e' ) ;
+
+oC_RemoveItem : ( oC_Variable oC_NodeLabels )
+ | oC_PropertyExpression
+ ;
+
+oC_InQueryCall : CALL SP oC_ExplicitProcedureInvocation ( SP? YIELD SP oC_YieldItems )? ;
+
+CALL : ( 'C' | 'c' ) ( 'A' | 'a' ) ( 'L' | 'l' ) ( 'L' | 'l' ) ;
+
+YIELD : ( 'Y' | 'y' ) ( 'I' | 'i' ) ( 'E' | 'e' ) ( 'L' | 'l' ) ( 'D' | 'd' ) ;
+
+oC_StandaloneCall : CALL SP ( oC_ExplicitProcedureInvocation | oC_ImplicitProcedureInvocation ) ( SP YIELD SP oC_YieldItems )? ;
+
+oC_YieldItems : ( '*' | ( oC_YieldItem ( SP? ',' SP? oC_YieldItem )* ) ) ( SP? oC_Where )? ;
+
+oC_YieldItem : ( oC_ProcedureResultField SP AS SP )? oC_Variable ;
+
+oC_With : WITH ( SP? DISTINCT )? SP oC_ReturnBody ( SP? oC_Where )? ;
+
+WITH : ( 'W' | 'w' ) ( 'I' | 'i' ) ( 'T' | 't' ) ( 'H' | 'h' ) ;
+
+DISTINCT : ( 'D' | 'd' ) ( 'I' | 'i' ) ( 'S' | 's' ) ( 'T' | 't' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'C' | 'c' ) ( 'T' | 't' ) ;
+
+oC_Return : RETURN ( SP? DISTINCT )? SP oC_ReturnBody ;
+
+RETURN : ( 'R' | 'r' ) ( 'E' | 'e' ) ( 'T' | 't' ) ( 'U' | 'u' ) ( 'R' | 'r' ) ( 'N' | 'n' ) ;
+
+oC_ReturnBody : oC_ReturnItems ( SP oC_Order )? ( SP oC_Skip )? ( SP oC_Limit )? ;
+
+oC_ReturnItems : ( '*' ( SP? ',' SP? oC_ReturnItem )* )
+ | ( oC_ReturnItem ( SP? ',' SP? oC_ReturnItem )* )
+ ;
+
+oC_ReturnItem : ( oC_Expression SP AS SP oC_Variable )
+ | oC_Expression
+ ;
+
+oC_Order : ORDER SP BY SP oC_SortItem ( ',' SP? oC_SortItem )* ;
+
+ORDER : ( 'O' | 'o' ) ( 'R' | 'r' ) ( 'D' | 'd' ) ( 'E' | 'e' ) ( 'R' | 'r' ) ;
+
+BY : ( 'B' | 'b' ) ( 'Y' | 'y' ) ;
+
+oC_Skip : L_SKIP SP oC_Expression ;
+
+L_SKIP : ( 'S' | 's' ) ( 'K' | 'k' ) ( 'I' | 'i' ) ( 'P' | 'p' ) ;
+
+oC_Limit : LIMIT SP oC_Expression ;
+
+LIMIT : ( 'L' | 'l' ) ( 'I' | 'i' ) ( 'M' | 'm' ) ( 'I' | 'i' ) ( 'T' | 't' ) ;
+
+oC_SortItem : oC_Expression ( SP? ( ASCENDING | ASC | DESCENDING | DESC ) )? ;
+
+ASCENDING : ( 'A' | 'a' ) ( 'S' | 's' ) ( 'C' | 'c' ) ( 'E' | 'e' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'G' | 'g' ) ;
+
+ASC : ( 'A' | 'a' ) ( 'S' | 's' ) ( 'C' | 'c' ) ;
+
+DESCENDING : ( 'D' | 'd' ) ( 'E' | 'e' ) ( 'S' | 's' ) ( 'C' | 'c' ) ( 'E' | 'e' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'G' | 'g' ) ;
+
+DESC : ( 'D' | 'd' ) ( 'E' | 'e' ) ( 'S' | 's' ) ( 'C' | 'c' ) ;
+
+oC_Hint : USING SP JOIN SP ON SP oC_Variable
+ | USING SP START SP ON SP oC_Variable
+ ;
+
+USING : ( 'U' | 'u' ) ( 'S' | 's' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'G' | 'g' ) ;
+
+JOIN : ( 'J' | 'j' ) ( 'O' | 'o' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ;
+
+START : ( 'S' | 's' ) ( 'T' | 't' ) ( 'A' | 'a' ) ( 'R' | 'r' ) ( 'T' | 't' ) ;
+
+oC_Where : WHERE SP oC_Expression ;
+
+WHERE : ( 'W' | 'w' ) ( 'H' | 'h' ) ( 'E' | 'e' ) ( 'R' | 'r' ) ( 'E' | 'e' ) ;
+
+oC_Pattern : oC_PatternPart ( SP? ',' SP? oC_PatternPart )* ;
+
+oC_PatternPart : ( oC_Variable SP? '=' SP? oC_AnonymousPatternPart )
+ | oC_AnonymousPatternPart
+ ;
+
+oC_AnonymousPatternPart : oC_PatternElement ;
+
+oC_PatternElement : ( oC_NodePattern ( SP? oC_PatternElementChain )* )
+ | ( '(' oC_PatternElement ')' )
+ ;
+
+oC_NodePattern : '(' SP? ( oC_Variable SP? )? ( oC_NodeLabels SP? )? ( oC_Properties SP? )? ')' ;
+
+oC_PatternElementChain : oC_RelationshipPattern SP? oC_NodePattern ;
+
+oC_RelationshipPattern : ( oC_LeftArrowHead SP? oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash SP? oC_RightArrowHead )
+ | ( oC_LeftArrowHead SP? oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash )
+ | ( oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash SP? oC_RightArrowHead )
+ | ( oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash )
+ ;
+
+oC_RelationshipDetail : '[' SP? ( oC_Variable SP? )? ( oC_RelationshipTypes SP? )? oC_RangeLiteral? ( oC_Properties SP? )? ']' ;
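+// e.g. (a)-[r:KNOWS]->(b), (a)<-[:ACTED_IN]-(m), or (a)--(b)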
+
+oC_Properties : oC_MapLiteral
+ | oC_Parameter
+ ;
+
+oC_RelationshipTypes : ':' SP? oC_RelTypeName ( SP? '|' ':'? SP? oC_RelTypeName )* ;
+
+oC_NodeLabels : oC_NodeLabel ( SP? oC_NodeLabel )* ;
+
+oC_NodeLabel : ':' SP? oC_LabelName ;
+
+oC_RangeLiteral : '*' SP? ( oC_IntegerLiteral SP? )? ( '..' SP? ( oC_IntegerLiteral SP? )? )? ;
+
+oC_LabelName : oC_SchemaName ;
+
+oC_RelTypeName : oC_SchemaName ;
+
+oC_Expression : oC_OrExpression ;
+
+oC_OrExpression : oC_XorExpression ( SP OR SP oC_XorExpression )* ;
+
+OR : ( 'O' | 'o' ) ( 'R' | 'r' ) ;
+
+oC_XorExpression : oC_AndExpression ( SP XOR SP oC_AndExpression )* ;
+
+XOR : ( 'X' | 'x' ) ( 'O' | 'o' ) ( 'R' | 'r' ) ;
+
+oC_AndExpression : oC_NotExpression ( SP AND SP oC_NotExpression )* ;
+
+AND : ( 'A' | 'a' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ;
+
+oC_NotExpression : ( NOT SP? )* oC_ComparisonExpression ;
+
+NOT : ( 'N' | 'n' ) ( 'O' | 'o' ) ( 'T' | 't' ) ;
+
+oC_ComparisonExpression : oC_AddOrSubtractExpression ( SP? oC_PartialComparisonExpression )* ;
+
+oC_AddOrSubtractExpression : oC_MultiplyDivideModuloExpression ( ( SP? '+' SP? oC_MultiplyDivideModuloExpression ) | ( SP? '-' SP? oC_MultiplyDivideModuloExpression ) )* ;
+
+oC_MultiplyDivideModuloExpression : oC_PowerOfExpression ( ( SP? '*' SP? oC_PowerOfExpression ) | ( SP? '/' SP? oC_PowerOfExpression ) | ( SP? '%' SP? oC_PowerOfExpression ) )* ;
+
+oC_PowerOfExpression : oC_UnaryAddOrSubtractExpression ( SP? '^' SP? oC_UnaryAddOrSubtractExpression )* ;
+
+oC_UnaryAddOrSubtractExpression : ( ( '+' | '-' ) SP? )* oC_StringListNullOperatorExpression ;
+
+oC_StringListNullOperatorExpression : oC_PropertyOrLabelsExpression ( oC_StringOperatorExpression | oC_ListOperatorExpression | oC_NullOperatorExpression )* ;
+
+oC_ListOperatorExpression : ( SP IN SP? oC_PropertyOrLabelsExpression )
+ | ( SP? '[' oC_Expression ']' )
+ | ( SP? '[' oC_Expression? '..' oC_Expression? ']' )
+ ;
+
+IN : ( 'I' | 'i' ) ( 'N' | 'n' ) ;
+
+oC_StringOperatorExpression : ( ( SP STARTS SP WITH ) | ( SP ENDS SP WITH ) | ( SP CONTAINS ) | ( SP REGEXP ) ) SP? oC_PropertyOrLabelsExpression ;
+
+STARTS : ( 'S' | 's' ) ( 'T' | 't' ) ( 'A' | 'a' ) ( 'R' | 'r' ) ( 'T' | 't' ) ( 'S' | 's' ) ;
+
+ENDS : ( 'E' | 'e' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ( 'S' | 's' ) ;
+
+CONTAINS : ( 'C' | 'c' ) ( 'O' | 'o' ) ( 'N' | 'n' ) ( 'T' | 't' ) ( 'A' | 'a' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'S' | 's' ) ;
+
+REGEXP: ( 'R' | 'r' ) ( 'E' | 'e' ) ( 'G' | 'g' ) ( 'E' | 'e' ) ( 'X' | 'x' ) ( 'P' | 'p' ) ;
+
+oC_NullOperatorExpression : ( SP IS SP NULL_ )
+ | ( SP IS SP NOT SP NULL_ )
+ ;
+
+IS : ( 'I' | 'i' ) ( 'S' | 's' ) ;
+
+NULL_ : ( 'N' | 'n' ) ( 'U' | 'u' ) ( 'L' | 'l' ) ( 'L' | 'l' ) ;
+
+oC_PropertyOrLabelsExpression : oC_Atom ( SP? oC_PropertyLookup )* ( SP? oC_NodeLabels )? ;
+
+oC_Atom : oC_Literal
+ | oC_Parameter
+ | oC_CaseExpression
+ | ( COUNT SP? '(' SP? '*' SP? ')' )
+ | oC_ListComprehension
+ | oC_PatternComprehension
+ | ( ALL SP? '(' SP? oC_FilterExpression SP? ')' )
+ | ( ANY SP? '(' SP? oC_FilterExpression SP? ')' )
+ | ( NONE SP? '(' SP? oC_FilterExpression SP? ')' )
+ | ( SINGLE SP? '(' SP? oC_FilterExpression SP? ')' )
+ | oC_RelationshipsPattern
+ | oC_ParenthesizedExpression
+ | oC_FunctionInvocation
+ | oC_Variable
+ ;
+
+COUNT : ( 'C' | 'c' ) ( 'O' | 'o' ) ( 'U' | 'u' ) ( 'N' | 'n' ) ( 'T' | 't' ) ;
+
+ANY : ( 'A' | 'a' ) ( 'N' | 'n' ) ( 'Y' | 'y' ) ;
+
+NONE : ( 'N' | 'n' ) ( 'O' | 'o' ) ( 'N' | 'n' ) ( 'E' | 'e' ) ;
+
+SINGLE : ( 'S' | 's' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'G' | 'g' ) ( 'L' | 'l' ) ( 'E' | 'e' ) ;
+
+oC_Literal : oC_NumberLiteral
+ | StringLiteral
+ | oC_BooleanLiteral
+ | NULL_
+ | oC_MapLiteral
+ | oC_ListLiteral
+ ;
+
+oC_BooleanLiteral : TRUE_
+ | FALSE_
+ ;
+
+TRUE_ : ( 'T' | 't' ) ( 'R' | 'r' ) ( 'U' | 'u' ) ( 'E' | 'e' ) ;
+
+FALSE_ : ( 'F' | 'f' ) ( 'A' | 'a' ) ( 'L' | 'l' ) ( 'S' | 's' ) ( 'E' | 'e' ) ;
+
+oC_ListLiteral : '[' SP? ( oC_Expression SP? ( ',' SP? oC_Expression SP? )* )? ']' ;
+
+oC_PartialComparisonExpression : ( '=' SP? oC_AddOrSubtractExpression )
+ | ( '<>' SP? oC_AddOrSubtractExpression )
+ | ( '<' SP? oC_AddOrSubtractExpression )
+ | ( '>' SP? oC_AddOrSubtractExpression )
+ | ( '<=' SP? oC_AddOrSubtractExpression )
+ | ( '>=' SP? oC_AddOrSubtractExpression )
+ ;
+
+oC_ParenthesizedExpression : '(' SP? oC_Expression SP? ')' ;
+
+oC_RelationshipsPattern : oC_NodePattern ( SP? oC_PatternElementChain )+ ;
+
+oC_FilterExpression : oC_IdInColl ( SP? oC_Where )? ;
+
+oC_IdInColl : oC_Variable SP IN SP oC_Expression ;
+
+oC_FunctionInvocation : oC_FunctionName SP? '(' SP? ( DISTINCT SP? )? ( oC_Expression SP? ( ',' SP? oC_Expression SP? )* )? ')' ;
+
+oC_FunctionName : ( oC_Namespace oC_SymbolicName )
+ | EXISTS
+ ;
+
+EXISTS : ( 'E' | 'e' ) ( 'X' | 'x' ) ( 'I' | 'i' ) ( 'S' | 's' ) ( 'T' | 't' ) ( 'S' | 's' ) ;
+
+oC_ExplicitProcedureInvocation : oC_ProcedureName SP? '(' SP? ( oC_Expression SP? ( ',' SP? oC_Expression SP? )* )? ')' ;
+
+oC_ImplicitProcedureInvocation : oC_ProcedureName ;
+
+oC_ProcedureResultField : oC_SymbolicName ;
+
+oC_ProcedureName : oC_Namespace oC_SymbolicName ;
+
+oC_Namespace : ( oC_SymbolicName '.' )* ;
+
+oC_ListComprehension : '[' SP? oC_FilterExpression ( SP? '|' SP? oC_Expression )? SP? ']' ;
+
+oC_PatternComprehension : '[' SP? ( oC_Variable SP? '=' SP? )? oC_RelationshipsPattern SP? ( WHERE SP? oC_Expression SP? )? '|' SP? oC_Expression SP? ']' ;
+
+oC_PropertyLookup : '.' SP? ( oC_PropertyKeyName ) ;
+
+oC_CaseExpression : ( ( CASE ( SP? oC_CaseAlternatives )+ ) | ( CASE SP? oC_Expression ( SP? oC_CaseAlternatives )+ ) ) ( SP? ELSE SP? oC_Expression )? SP? END ;
+
+CASE : ( 'C' | 'c' ) ( 'A' | 'a' ) ( 'S' | 's' ) ( 'E' | 'e' ) ;
+
+ELSE : ( 'E' | 'e' ) ( 'L' | 'l' ) ( 'S' | 's' ) ( 'E' | 'e' ) ;
+
+END : ( 'E' | 'e' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ;
+
+oC_CaseAlternatives : WHEN SP? oC_Expression SP? THEN SP? oC_Expression ;
+
+WHEN : ( 'W' | 'w' ) ( 'H' | 'h' ) ( 'E' | 'e' ) ( 'N' | 'n' ) ;
+
+THEN : ( 'T' | 't' ) ( 'H' | 'h' ) ( 'E' | 'e' ) ( 'N' | 'n' ) ;
+
+oC_Variable : oC_SymbolicName ;
+
+StringLiteral : ( '"' ( StringLiteral_0 | EscapedChar )* '"' )
+ | ( '\'' ( StringLiteral_1 | EscapedChar )* '\'' )
+ ;
+
+EscapedChar : '\\' ( '\\' | '\'' | '"' | ( 'B' | 'b' ) | ( 'F' | 'f' ) | ( 'N' | 'n' ) | ( 'R' | 'r' ) | ( 'T' | 't' ) | ( ( 'U' | 'u' ) ( HexDigit HexDigit HexDigit HexDigit ) ) | ( ( 'U' | 'u' ) ( HexDigit HexDigit HexDigit HexDigit HexDigit HexDigit HexDigit HexDigit ) ) ) ;
+
+oC_NumberLiteral : oC_DoubleLiteral
+ | oC_IntegerLiteral
+ ;
+
+oC_MapLiteral : '{' SP? ( oC_PropertyKeyName SP? ':' SP? oC_Expression SP? ( ',' SP? oC_PropertyKeyName SP? ':' SP? oC_Expression SP? )* )? '}' ;
+
+oC_Parameter : '$' ( oC_SymbolicName | DecimalInteger ) ;
+
+oC_PropertyExpression : oC_Atom ( SP? oC_PropertyLookup )+ ;
+
+oC_PropertyKeyName : oC_SchemaName ;
+
+oC_IntegerLiteral : HexInteger
+ | OctalInteger
+ | DecimalInteger
+ ;
+
+HexInteger : '0x' ( HexDigit )+ ;
+
+DecimalInteger : ZeroDigit
+ | ( NonZeroDigit ( Digit )* )
+ ;
+
+OctalInteger : ZeroDigit ( OctDigit )+ ;
+
+HexLetter : ( 'A' | 'a' )
+ | ( 'B' | 'b' )
+ | ( 'C' | 'c' )
+ | ( 'D' | 'd' )
+ | ( 'E' | 'e' )
+ | ( 'F' | 'f' )
+ ;
+
+HexDigit : Digit
+ | HexLetter
+ ;
+
+Digit : ZeroDigit
+ | NonZeroDigit
+ ;
+
+NonZeroDigit : NonZeroOctDigit
+ | '8'
+ | '9'
+ ;
+
+NonZeroOctDigit : '1'
+ | '2'
+ | '3'
+ | '4'
+ | '5'
+ | '6'
+ | '7'
+ ;
+
+OctDigit : ZeroDigit
+ | NonZeroOctDigit
+ ;
+
+ZeroDigit : '0' ;
+
+oC_DoubleLiteral : ExponentDecimalReal
+ | RegularDecimalReal
+ ;
+
+ExponentDecimalReal : ( ( Digit )+ | ( ( Digit )+ '.' ( Digit )+ ) | ( '.' ( Digit )+ ) ) ( 'E' | 'e' ) '-'? ( Digit )+ ;
+
+RegularDecimalReal : ( Digit )* '.' ( Digit )+ ;
+
+oC_SchemaName : oC_SymbolicName
+ | oC_ReservedWord
+ ;
+
+oC_SymbolicName : UnescapedSymbolicName
+ | EscapedSymbolicName
+ | HexLetter
+ | COUNT
+ | FILTER
+ | EXTRACT
+ | ANY
+ | NONE
+ | SINGLE
+ ;
+
+FILTER : ( 'F' | 'f' ) ( 'I' | 'i' ) ( 'L' | 'l' ) ( 'T' | 't' ) ( 'E' | 'e' ) ( 'R' | 'r' ) ;
+
+EXTRACT : ( 'E' | 'e' ) ( 'X' | 'x' ) ( 'T' | 't' ) ( 'R' | 'r' ) ( 'A' | 'a' ) ( 'C' | 'c' ) ( 'T' | 't' ) ;
+
+UnescapedSymbolicName : IdentifierStart ( IdentifierPart )* ;
+
+oC_ReservedWord : ALL
+ | ASC
+ | ASCENDING
+ | BY
+ | CREATE
+ | DELETE_
+ | DESC
+ | DESCENDING
+ | DETACH
+ | EXISTS
+ | LIMIT
+ | MATCH
+ | MERGE
+ | ON
+ | OPTIONAL_
+ | ORDER
+ | REMOVE
+ | RETURN
+ | SET
+ | L_SKIP
+ | WHERE
+ | WITH
+ | UNION
+ | UNWIND
+ | AND
+ | AS
+ | CONTAINS
+ | DISTINCT
+ | ENDS
+ | IN
+ | IS
+ | NOT
+ | OR
+ | STARTS
+ | XOR
+ | FALSE_
+ | TRUE_
+ | NULL_
+ | CONSTRAINT
+ | DO
+ | FOR
+ | REQUIRE
+ | UNIQUE
+ | CASE
+ | WHEN
+ | THEN
+ | ELSE
+ | END
+ | MANDATORY
+ | SCALAR
+ | OF
+ | ADD
+ | DROP
+ ;
+
+CONSTRAINT : ( 'C' | 'c' ) ( 'O' | 'o' ) ( 'N' | 'n' ) ( 'S' | 's' ) ( 'T' | 't' ) ( 'R' | 'r' ) ( 'A' | 'a' ) ( 'I' | 'i' ) ( 'N' | 'n' ) ( 'T' | 't' ) ;
+
+DO : ( 'D' | 'd' ) ( 'O' | 'o' ) ;
+
+FOR : ( 'F' | 'f' ) ( 'O' | 'o' ) ( 'R' | 'r' ) ;
+
+REQUIRE : ( 'R' | 'r' ) ( 'E' | 'e' ) ( 'Q' | 'q' ) ( 'U' | 'u' ) ( 'I' | 'i' ) ( 'R' | 'r' ) ( 'E' | 'e' ) ;
+
+UNIQUE : ( 'U' | 'u' ) ( 'N' | 'n' ) ( 'I' | 'i' ) ( 'Q' | 'q' ) ( 'U' | 'u' ) ( 'E' | 'e' ) ;
+
+MANDATORY : ( 'M' | 'm' ) ( 'A' | 'a' ) ( 'N' | 'n' ) ( 'D' | 'd' ) ( 'A' | 'a' ) ( 'T' | 't' ) ( 'O' | 'o' ) ( 'R' | 'r' ) ( 'Y' | 'y' ) ;
+
+SCALAR : ( 'S' | 's' ) ( 'C' | 'c' ) ( 'A' | 'a' ) ( 'L' | 'l' ) ( 'A' | 'a' ) ( 'R' | 'r' ) ;
+
+OF : ( 'O' | 'o' ) ( 'F' | 'f' ) ;
+
+ADD : ( 'A' | 'a' ) ( 'D' | 'd' ) ( 'D' | 'd' ) ;
+
+DROP : ( 'D' | 'd' ) ( 'R' | 'r' ) ( 'O' | 'o' ) ( 'P' | 'p' ) ;
+
+/**
+ * Based on the unicode identifier and pattern syntax
+ * (http://www.unicode.org/reports/tr31/)
+ * And extended with a few characters.
+ */
+IdentifierStart : ID_Start
+ | Pc
+ ;
+
+/**
+ * Based on the unicode identifier and pattern syntax
+ * (http://www.unicode.org/reports/tr31/)
+ * And extended with a few characters.
+ */
+IdentifierPart : ID_Continue
+ | Sc
+ ;
+
+/**
+ * Any character except "`", enclosed within `backticks`. Backticks are escaped with double backticks. */
+EscapedSymbolicName : ( '`' ( EscapedSymbolicName_0 )* '`' )+ ;
+
+SP : ( WHITESPACE )+ ;
+
+WHITESPACE : SPACE
+ | TAB
+ | LF
+ | VT
+ | FF
+ | CR
+ | FS
+ | GS
+ | RS
+ | US
+ | ' '
+ | ''
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | '
'
+ | '
'
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | ' '
+ | Comment
+ ;
+
+Comment : ( '/*' ( Comment_1 | ( '*' Comment_2 ) )* '*/' )
+ | ( '//' ( Comment_3 )* CR? ( LF | EOF ) )
+ ;
+
+oC_LeftArrowHead : '<'
+ | '⟨'
+ | '〈'
+ | '﹤'
+ | '<'
+ ;
+
+oC_RightArrowHead : '>'
+ | '⟩'
+ | '〉'
+ | '﹥'
+ | '>'
+ ;
+
+oC_Dash : '-'
+ | ''
+ | '‐'
+ | '‑'
+ | '‒'
+ | '–'
+ | '—'
+ | '―'
+ | '−'
+ | '﹘'
+ | '﹣'
+ | '-'
+ ;
+
+fragment FF : [\f] ;
+
+fragment EscapedSymbolicName_0 : [\u0000-_a-\uFFFF] ;
+
+fragment RS : [\u001E] ;
+
+fragment ID_Continue : [0-9A-Z_a-z\u00AA\u00B5\u00B7\u00BA\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02C1\u02C6-\u02D1\u02E0-\u02E4\u02EC\u02EE\u0300-\u0374\u0376-\u0377\u037A-\u037D\u0386-\u038A\u038C\u038E-\u03A1\u03A3-\u03F5\u03F7-\u0481\u0483-\u0487\u048A-\u0527\u0531-\u0556\u0559\u0561-\u0587\u0591-\u05BD\u05BF\u05C1-\u05C2\u05C4-\u05C5\u05C7\u05D0-\u05EA\u05F0-\u05F2\u0610-\u061A\u0620-\u0669\u066E-\u06D3\u06D5-\u06DC\u06DF-\u06E8\u06EA-\u06FC\u06FF\u0710-\u074A\u074D-\u07B1\u07C0-\u07F5\u07FA\u0800-\u082D\u0840-\u085B\u08A0\u08A2-\u08AC\u08E4-\u08FE\u0900-\u0963\u0966-\u096F\u0971-\u0977\u0979-\u097F\u0981-\u0983\u0985-\u098C\u098F-\u0990\u0993-\u09A8\u09AA-\u09B0\u09B2\u09B6-\u09B9\u09BC-\u09C4\u09C7-\u09C8\u09CB-\u09CE\u09D7\u09DC-\u09DD\u09DF-\u09E3\u09E6-\u09F1\u0A01-\u0A03\u0A05-\u0A0A\u0A0F-\u0A10\u0A13-\u0A28\u0A2A-\u0A30\u0A32-\u0A33\u0A35-\u0A36\u0A38-\u0A39\u0A3C\u0A3E-\u0A42\u0A47-\u0A48\u0A4B-\u0A4D\u0A51\u0A59-\u0A5C\u0A5E\u0A66-\u0A75\u0A81-\u0A83\u0A85-\u0A8D\u0A8F-\u0A91\u0A93-\u0AA8\u0AAA-\u0AB0\u0AB2-\u0AB3\u0AB5-\u0AB9\u0ABC-\u0AC5\u0AC7-\u0AC9\u0ACB-\u0ACD\u0AD0\u0AE0-\u0AE3\u0AE6-\u0AEF\u0B01-\u0B03\u0B05-\u0B0C\u0B0F-\u0B10\u0B13-\u0B28\u0B2A-\u0B30\u0B32-\u0B33\u0B35-\u0B39\u0B3C-\u0B44\u0B47-\u0B48\u0B4B-\u0B4D\u0B56-\u0B57\u0B5C-\u0B5D\u0B5F-\u0B63\u0B66-\u0B6F\u0B71\u0B82-\u0B83\u0B85-\u0B8A\u0B8E-\u0B90\u0B92-\u0B95\u0B99-\u0B9A\u0B9C\u0B9E-\u0B9F\u0BA3-\u0BA4\u0BA8-\u0BAA\u0BAE-\u0BB9\u0BBE-\u0BC2\u0BC6-\u0BC8\u0BCA-\u0BCD\u0BD0\u0BD7\u0BE6-\u0BEF\u0C01-\u0C03\u0C05-\u0C0C\u0C0E-\u0C10\u0C12-\u0C28\u0C2A-\u0C33\u0C35-\u0C39\u0C3D-\u0C44\u0C46-\u0C48\u0C4A-\u0C4D\u0C55-\u0C56\u0C58-\u0C59\u0C60-\u0C63\u0C66-\u0C6F\u0C82-\u0C83\u0C85-\u0C8C\u0C8E-\u0C90\u0C92-\u0CA8\u0CAA-\u0CB3\u0CB5-\u0CB9\u0CBC-\u0CC4\u0CC6-\u0CC8\u0CCA-\u0CCD\u0CD5-\u0CD6\u0CDE\u0CE0-\u0CE3\u0CE6-\u0CEF\u0CF1-\u0CF2\u0D02-\u0D03\u0D05-\u0D0C\u0D0E-\u0D10\u0D12-\u0D3A\u0D3D-\u0D44\u0D46-\u0D48\u0D4A-\u0D4E\u0D57\u0D60-\u0D63\u0D66-\u0D6F\u0D7A-\u0D7F\u0D82-\u0D83\u0D85-\u0D96\u0D9A-\u0DB1\u0DB3-\u0DBB\u0DBD\u0DC0-\u0DC6\u0DCA\u0DCF-\u0DD4\u0DD6\u0DD8-\u0DDF\u0DF2-\u0DF3\u0E01-\u0E3A\u0E40-\u0E4E\u0E50-\u0E59\u0E81-\u0E82\u0E84\u0E87-\u0E88\u0E8A\u0E8D\u0E94-\u0E97\u0E99-\u0E9F\u0EA1-\u0EA3\u0EA5\u0EA7\u0EAA-\u0EAB\u0EAD-\u0EB9\u0EBB-\u0EBD\u0EC0-\u0EC4\u0EC6\u0EC8-\u0ECD\u0ED0-\u0ED9\u0EDC-\u0EDF\u0F00\u0F18-\u0F19\u0F20-\u0F29\u0F35\u0F37\u0F39\u0F3E-\u0F47\u0F49-\u0F6C\u0F71-\u0F84\u0F86-\u0F97\u0F99-\u0FBC\u0FC6\u1000-\u1049\u1050-\u109D\u10A0-\u10C5\u10C7\u10CD\u10D0-\u10FA\u10FC-\u1248\u124A-\u124D\u1250-\u1256\u1258\u125A-\u125D\u1260-\u1288\u128A-\u128D\u1290-\u12B0\u12B2-\u12B5\u12B8-\u12BE\u12C0\u12C2-\u12C5\u12C8-\u12D6\u12D8-\u1310\u1312-\u1315\u1318-\u135A\u135D-\u135F\u1369-\u1371\u1380-\u138F\u13A0-\u13F4\u1401-\u166C\u166F-\u167F\u1681-\u169A\u16A0-\u16EA\u16EE-\u16F0\u1700-\u170C\u170E-\u1714\u1720-\u1734\u1740-\u1753\u1760-\u176C\u176E-\u1770\u1772-\u1773\u1780-\u17D3\u17D7\u17DC-\u17DD\u17E0-\u17E9\u180B-\u180D\u1810-\u1819\u1820-\u1877\u1880-\u18AA\u18B0-\u18F5\u1900-\u191C\u1920-\u192B\u1930-\u193B\u1946-\u196D\u1970-\u1974\u1980-\u19AB\u19B0-\u19C9\u19D0-\u19DA\u1A00-\u1A1B\u1A20-\u1A5E\u1A60-\u1A7C\u1A7F-\u1A89\u1A90-\u1A99\u1AA7\u1B00-\u1B4B\u1B50-\u1B59\u1B6B-\u1B73\u1B80-\u1BF3\u1C00-\u1C37\u1C40-\u1C49\u1C4D-\u1C7D\u1CD0-\u1CD2\u1CD4-\u1CF6\u1D00-\u1DE6\u1DFC-\u1F15\u1F18-\u1F1D\u1F20-\u1F45\u1F48-\u1F4D\u1F50-\u1F57\u1F59\u1F5B\u1F5D\u1F5F-\u1F7D\u1F80-\u1FB4\u1FB6-\u1FBC\u1FBE\u1FC2-\u1FC4\u1FC6-\u1FCC\u1FD0-\u1FD3\u1FD6-\u1FDB\u1FE0-\u1FEC\u1FF2-\u1FF4\u1FF6-\u1FFC\u
203F-\u2040\u2054\u2071\u207F\u2090-\u209C\u20D0-\u20DC\u20E1\u20E5-\u20F0\u2102\u2107\u210A-\u2113\u2115\u2118-\u211D\u2124\u2126\u2128\u212A-\u2139\u213C-\u213F\u2145-\u2149\u214E\u2160-\u2188\u2C00-\u2C2E\u2C30-\u2C5E\u2C60-\u2CE4\u2CEB-\u2CF3\u2D00-\u2D25\u2D27\u2D2D\u2D30-\u2D67\u2D6F\u2D7F-\u2D96\u2DA0-\u2DA6\u2DA8-\u2DAE\u2DB0-\u2DB6\u2DB8-\u2DBE\u2DC0-\u2DC6\u2DC8-\u2DCE\u2DD0-\u2DD6\u2DD8-\u2DDE\u2DE0-\u2DFF\u3005-\u3007\u3021-\u302F\u3031-\u3035\u3038-\u303C\u3041-\u3096\u3099-\u309F\u30A1-\u30FA\u30FC-\u30FF\u3105-\u312D\u3131-\u318E\u31A0-\u31BA\u31F0-\u31FF\u3400-\u4DB5\u4E00-\u9FCC\uA000-\uA48C\uA4D0-\uA4FD\uA500-\uA60C\uA610-\uA62B\uA640-\uA66F\uA674-\uA67D\uA67F-\uA697\uA69F-\uA6F1\uA717-\uA71F\uA722-\uA788\uA78B-\uA78E\uA790-\uA793\uA7A0-\uA7AA\uA7F8-\uA827\uA840-\uA873\uA880-\uA8C4\uA8D0-\uA8D9\uA8E0-\uA8F7\uA8FB\uA900-\uA92D\uA930-\uA953\uA960-\uA97C\uA980-\uA9C0\uA9CF-\uA9D9\uAA00-\uAA36\uAA40-\uAA4D\uAA50-\uAA59\uAA60-\uAA76\uAA7A-\uAA7B\uAA80-\uAAC2\uAADB-\uAADD\uAAE0-\uAAEF\uAAF2-\uAAF6\uAB01-\uAB06\uAB09-\uAB0E\uAB11-\uAB16\uAB20-\uAB26\uAB28-\uAB2E\uABC0-\uABEA\uABEC-\uABED\uABF0-\uABF9\uAC00-\uD7A3\uD7B0-\uD7C6\uD7CB-\uD7FB\uF900-\uFA6D\uFA70-\uFAD9\uFB00-\uFB06\uFB13-\uFB17\uFB1D-\uFB28\uFB2A-\uFB36\uFB38-\uFB3C\uFB3E\uFB40-\uFB41\uFB43-\uFB44\uFB46-\uFBB1\uFBD3-\uFD3D\uFD50-\uFD8F\uFD92-\uFDC7\uFDF0-\uFDFB\uFE00-\uFE0F\uFE20-\uFE26\uFE33-\uFE34\uFE4D-\uFE4F\uFE70-\uFE74\uFE76-\uFEFC\uFF10-\uFF19\uFF21-\uFF3A\uFF3F\uFF41-\uFF5A\uFF66-\uFFBE\uFFC2-\uFFC7\uFFCA-\uFFCF\uFFD2-\uFFD7\uFFDA-\uFFDC] ;
+
+fragment Comment_1 : [\u0000-)+-\uFFFF] ;
+
+fragment StringLiteral_1 : [\u0000-&(-[\]-\uFFFF] ;
+
+fragment Comment_3 : [\u0000-\t\u000B-\f\u000E-\uFFFF] ;
+
+fragment Comment_2 : [\u0000-.0-\uFFFF] ;
+
+fragment GS : [\u001D] ;
+
+fragment FS : [\u001C] ;
+
+fragment CR : [\r] ;
+
+fragment Sc : [$\u00A2-\u00A5\u058F\u060B\u09F2-\u09F3\u09FB\u0AF1\u0BF9\u0E3F\u17DB\u20A0-\u20BA\uA838\uFDFC\uFE69\uFF04\uFFE0-\uFFE1\uFFE5-\uFFE6] ;
+
+fragment SPACE : [ ] ;
+
+fragment Pc : [_\u203F-\u2040\u2054\uFE33-\uFE34\uFE4D-\uFE4F\uFF3F] ;
+
+fragment TAB : [\t] ;
+
+fragment StringLiteral_0 : [\u0000-!#-[\]-\uFFFF] ;
+
+fragment LF : [\n] ;
+
+fragment VT : [\u000B] ;
+
+fragment US : [\u001F] ;
+
+fragment ID_Start : [A-Za-z\u00AA\u00B5\u00BA\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02C1\u02C6-\u02D1\u02E0-\u02E4\u02EC\u02EE\u0370-\u0374\u0376-\u0377\u037A-\u037D\u0386\u0388-\u038A\u038C\u038E-\u03A1\u03A3-\u03F5\u03F7-\u0481\u048A-\u0527\u0531-\u0556\u0559\u0561-\u0587\u05D0-\u05EA\u05F0-\u05F2\u0620-\u064A\u066E-\u066F\u0671-\u06D3\u06D5\u06E5-\u06E6\u06EE-\u06EF\u06FA-\u06FC\u06FF\u0710\u0712-\u072F\u074D-\u07A5\u07B1\u07CA-\u07EA\u07F4-\u07F5\u07FA\u0800-\u0815\u081A\u0824\u0828\u0840-\u0858\u08A0\u08A2-\u08AC\u0904-\u0939\u093D\u0950\u0958-\u0961\u0971-\u0977\u0979-\u097F\u0985-\u098C\u098F-\u0990\u0993-\u09A8\u09AA-\u09B0\u09B2\u09B6-\u09B9\u09BD\u09CE\u09DC-\u09DD\u09DF-\u09E1\u09F0-\u09F1\u0A05-\u0A0A\u0A0F-\u0A10\u0A13-\u0A28\u0A2A-\u0A30\u0A32-\u0A33\u0A35-\u0A36\u0A38-\u0A39\u0A59-\u0A5C\u0A5E\u0A72-\u0A74\u0A85-\u0A8D\u0A8F-\u0A91\u0A93-\u0AA8\u0AAA-\u0AB0\u0AB2-\u0AB3\u0AB5-\u0AB9\u0ABD\u0AD0\u0AE0-\u0AE1\u0B05-\u0B0C\u0B0F-\u0B10\u0B13-\u0B28\u0B2A-\u0B30\u0B32-\u0B33\u0B35-\u0B39\u0B3D\u0B5C-\u0B5D\u0B5F-\u0B61\u0B71\u0B83\u0B85-\u0B8A\u0B8E-\u0B90\u0B92-\u0B95\u0B99-\u0B9A\u0B9C\u0B9E-\u0B9F\u0BA3-\u0BA4\u0BA8-\u0BAA\u0BAE-\u0BB9\u0BD0\u0C05-\u0C0C\u0C0E-\u0C10\u0C12-\u0C28\u0C2A-\u0C33\u0C35-\u0C39\u0C3D\u0C58-\u0C59\u0C60-\u0C61\u0C85-\u0C8C\u0C8E-\u0C90\u0C92-\u0CA8\u0CAA-\u0CB3\u0CB5-\u0CB9\u0CBD\u0CDE\u0CE0-\u0CE1\u0CF1-\u0CF2\u0D05-\u0D0C\u0D0E-\u0D10\u0D12-\u0D3A\u0D3D\u0D4E\u0D60-\u0D61\u0D7A-\u0D7F\u0D85-\u0D96\u0D9A-\u0DB1\u0DB3-\u0DBB\u0DBD\u0DC0-\u0DC6\u0E01-\u0E30\u0E32-\u0E33\u0E40-\u0E46\u0E81-\u0E82\u0E84\u0E87-\u0E88\u0E8A\u0E8D\u0E94-\u0E97\u0E99-\u0E9F\u0EA1-\u0EA3\u0EA5\u0EA7\u0EAA-\u0EAB\u0EAD-\u0EB0\u0EB2-\u0EB3\u0EBD\u0EC0-\u0EC4\u0EC6\u0EDC-\u0EDF\u0F00\u0F40-\u0F47\u0F49-\u0F6C\u0F88-\u0F8C\u1000-\u102A\u103F\u1050-\u1055\u105A-\u105D\u1061\u1065-\u1066\u106E-\u1070\u1075-\u1081\u108E\u10A0-\u10C5\u10C7\u10CD\u10D0-\u10FA\u10FC-\u1248\u124A-\u124D\u1250-\u1256\u1258\u125A-\u125D\u1260-\u1288\u128A-\u128D\u1290-\u12B0\u12B2-\u12B5\u12B8-\u12BE\u12C0\u12C2-\u12C5\u12C8-\u12D6\u12D8-\u1310\u1312-\u1315\u1318-\u135A\u1380-\u138F\u13A0-\u13F4\u1401-\u166C\u166F-\u167F\u1681-\u169A\u16A0-\u16EA\u16EE-\u16F0\u1700-\u170C\u170E-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176C\u176E-\u1770\u1780-\u17B3\u17D7\u17DC\u1820-\u1877\u1880-\u18A8\u18AA\u18B0-\u18F5\u1900-\u191C\u1950-\u196D\u1970-\u1974\u1980-\u19AB\u19C1-\u19C7\u1A00-\u1A16\u1A20-\u1A54\u1AA7\u1B05-\u1B33\u1B45-\u1B4B\u1B83-\u1BA0\u1BAE-\u1BAF\u1BBA-\u1BE5\u1C00-\u1C23\u1C4D-\u1C4F\u1C5A-\u1C7D\u1CE9-\u1CEC\u1CEE-\u1CF1\u1CF5-\u1CF6\u1D00-\u1DBF\u1E00-\u1F15\u1F18-\u1F1D\u1F20-\u1F45\u1F48-\u1F4D\u1F50-\u1F57\u1F59\u1F5B\u1F5D\u1F5F-\u1F7D\u1F80-\u1FB4\u1FB6-\u1FBC\u1FBE\u1FC2-\u1FC4\u1FC6-\u1FCC\u1FD0-\u1FD3\u1FD6-\u1FDB\u1FE0-\u1FEC\u1FF2-\u1FF4\u1FF6-\u1FFC\u2071\u207F\u2090-\u209C\u2102\u2107\u210A-\u2113\u2115\u2118-\u211D\u2124\u2126\u2128\u212A-\u2139\u213C-\u213F\u2145-\u2149\u214E\u2160-\u2188\u2C00-\u2C2E\u2C30-\u2C5E\u2C60-\u2CE4\u2CEB-\u2CEE\u2CF2-\u2CF3\u2D00-\u2D25\u2D27\u2D2D\u2D30-\u2D67\u2D6F\u2D80-\u2D96\u2DA0-\u2DA6\u2DA8-\u2DAE\u2DB0-\u2DB6\u2DB8-\u2DBE\u2DC0-\u2DC6\u2DC8-\u2DCE\u2DD0-\u2DD6\u2DD8-\u2DDE\u3005-\u3007\u3021-\u3029\u3031-\u3035\u3038-\u303C\u3041-\u3096\u309B-\u309F\u30A1-\u30FA\u30FC-\u30FF\u3105-\u312D\u3131-\u318E\u31A0-\u31BA\u31F0-\u31FF\u3400-\u4DB5\u4E00-\u9FCC\uA000-\uA48C\uA4D0-\uA4FD\uA500-\uA60C\uA610-\uA61F\uA62A-\uA62B\uA640-\uA66E\uA67F-\uA697\uA6A0-\uA6EF\uA717-\uA71F\uA722-\uA788\uA78B-\uA78E\uA790-\uA793\uA7A0-\uA7AA\uA7F8-\uA801\uA803-\uA805\uA807-
\uA80A\uA80C-\uA822\uA840-\uA873\uA882-\uA8B3\uA8F2-\uA8F7\uA8FB\uA90A-\uA925\uA930-\uA946\uA960-\uA97C\uA984-\uA9B2\uA9CF\uAA00-\uAA28\uAA40-\uAA42\uAA44-\uAA4B\uAA60-\uAA76\uAA7A\uAA80-\uAAAF\uAAB1\uAAB5-\uAAB6\uAAB9-\uAABD\uAAC0\uAAC2\uAADB-\uAADD\uAAE0-\uAAEA\uAAF2-\uAAF4\uAB01-\uAB06\uAB09-\uAB0E\uAB11-\uAB16\uAB20-\uAB26\uAB28-\uAB2E\uABC0-\uABE2\uAC00-\uD7A3\uD7B0-\uD7C6\uD7CB-\uD7FB\uF900-\uFA6D\uFA70-\uFAD9\uFB00-\uFB06\uFB13-\uFB17\uFB1D\uFB1F-\uFB28\uFB2A-\uFB36\uFB38-\uFB3C\uFB3E\uFB40-\uFB41\uFB43-\uFB44\uFB46-\uFBB1\uFBD3-\uFD3D\uFD50-\uFD8F\uFD92-\uFDC7\uFDF0-\uFDFB\uFE70-\uFE74\uFE76-\uFEFC\uFF21-\uFF3A\uFF41-\uFF5A\uFF66-\uFFBE\uFFC2-\uFFC7\uFFCA-\uFFCF\uFFD2-\uFFD7\uFFDA-\uFFDC] ;
+
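The `Lcypher.interp` file added next is ANTLR-generated metadata (token literal names, token symbolic names, rule names, and the serialized ATN) that accompanies the generated lexer and parser for TuGraph-DB's Cypher dialect. As a minimal sketch of how such generated artifacts are typically driven for a grammar-based syntax check, assuming `antlr4-python3-runtime` is installed and that the matching generated `LcypherLexer.py`/`LcypherParser.py` modules are importable (hypothetical module paths; the generated Python sources are not shown in this hunk):

```python
# Minimal sketch (not part of this patch): parse a candidate query against the
# Lcypher grammar and report whether it is syntactically well-formed.
from antlr4 import InputStream, CommonTokenStream
from antlr4.error.ErrorListener import ErrorListener

# Hypothetical import paths for the ANTLR-generated lexer/parser that this
# .interp file accompanies; adjust to wherever the generated modules live.
from LcypherLexer import LcypherLexer
from LcypherParser import LcypherParser


class CollectingErrorListener(ErrorListener):
    """Collects syntax errors instead of printing them to stderr."""

    def __init__(self):
        super().__init__()
        self.errors = []

    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        self.errors.append(f"line {line}:{column} {msg}")


def is_grammatical(query: str) -> bool:
    """Return True if `query` parses under the oC_Cypher start rule
    (oC_Cypher is the first entry in the rule names listed below)."""
    listener = CollectingErrorListener()
    lexer = LcypherLexer(InputStream(query))
    lexer.removeErrorListeners()
    lexer.addErrorListener(listener)
    parser = LcypherParser(CommonTokenStream(lexer))
    parser.removeErrorListeners()
    parser.addErrorListener(listener)
    parser.oC_Cypher()  # invoke the start rule
    return not listener.errors


if __name__ == "__main__":
    print(is_grammatical("MATCH (n:Person) RETURN n LIMIT 10"))  # True
    print(is_grammatical("MATCH (n:Person RETURN n"))            # False: unbalanced paren
```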
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.interp b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.interp
new file mode 100644
index 00000000..fec4e9cd
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.interp
@@ -0,0 +1,377 @@
+token literal names:
+null
+';'
+','
+'='
+'+='
+'*'
+'('
+')'
+'['
+']'
+':'
+'|'
+'..'
+'+'
+'-'
+'/'
+'%'
+'^'
+'<>'
+'<'
+'>'
+'<='
+'>='
+'.'
+'{'
+'}'
+'$'
+'⟨'
+'〈'
+'﹤'
+'<'
+'⟩'
+'〉'
+'﹥'
+'>'
+''
+'‐'
+'‑'
+'‒'
+'–'
+'—'
+'―'
+'−'
+'﹘'
+'﹣'
+'-'
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+'0'
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+
+token symbolic names:
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+EXPLAIN
+PROFILE
+UNION
+ALL
+OPTIONAL_
+MATCH
+UNWIND
+AS
+MERGE
+ON
+CREATE
+SET
+DETACH
+DELETE_
+REMOVE
+CALL
+YIELD
+WITH
+DISTINCT
+RETURN
+ORDER
+BY
+L_SKIP
+LIMIT
+ASCENDING
+ASC
+DESCENDING
+DESC
+USING
+JOIN
+START
+WHERE
+OR
+XOR
+AND
+NOT
+IN
+STARTS
+ENDS
+CONTAINS
+REGEXP
+IS
+NULL_
+COUNT
+ANY
+NONE
+SINGLE
+TRUE_
+FALSE_
+EXISTS
+CASE
+ELSE
+END
+WHEN
+THEN
+StringLiteral
+EscapedChar
+HexInteger
+DecimalInteger
+OctalInteger
+HexLetter
+HexDigit
+Digit
+NonZeroDigit
+NonZeroOctDigit
+OctDigit
+ZeroDigit
+ExponentDecimalReal
+RegularDecimalReal
+FILTER
+EXTRACT
+UnescapedSymbolicName
+CONSTRAINT
+DO
+FOR
+REQUIRE
+UNIQUE
+MANDATORY
+SCALAR
+OF
+ADD
+DROP
+IdentifierStart
+IdentifierPart
+EscapedSymbolicName
+SP
+WHITESPACE
+Comment
+
+rule names:
+oC_Cypher
+oC_Statement
+oC_Query
+oC_RegularQuery
+oC_Union
+oC_SingleQuery
+oC_SinglePartQuery
+oC_MultiPartQuery
+oC_UpdatingClause
+oC_ReadingClause
+oC_Match
+oC_Unwind
+oC_Merge
+oC_MergeAction
+oC_Create
+oC_Set
+oC_SetItem
+oC_Delete
+oC_Remove
+oC_RemoveItem
+oC_InQueryCall
+oC_StandaloneCall
+oC_YieldItems
+oC_YieldItem
+oC_With
+oC_Return
+oC_ReturnBody
+oC_ReturnItems
+oC_ReturnItem
+oC_Order
+oC_Skip
+oC_Limit
+oC_SortItem
+oC_Hint
+oC_Where
+oC_Pattern
+oC_PatternPart
+oC_AnonymousPatternPart
+oC_PatternElement
+oC_NodePattern
+oC_PatternElementChain
+oC_RelationshipPattern
+oC_RelationshipDetail
+oC_Properties
+oC_RelationshipTypes
+oC_NodeLabels
+oC_NodeLabel
+oC_RangeLiteral
+oC_LabelName
+oC_RelTypeName
+oC_Expression
+oC_OrExpression
+oC_XorExpression
+oC_AndExpression
+oC_NotExpression
+oC_ComparisonExpression
+oC_AddOrSubtractExpression
+oC_MultiplyDivideModuloExpression
+oC_PowerOfExpression
+oC_UnaryAddOrSubtractExpression
+oC_StringListNullOperatorExpression
+oC_ListOperatorExpression
+oC_StringOperatorExpression
+oC_NullOperatorExpression
+oC_PropertyOrLabelsExpression
+oC_Atom
+oC_Literal
+oC_BooleanLiteral
+oC_ListLiteral
+oC_PartialComparisonExpression
+oC_ParenthesizedExpression
+oC_RelationshipsPattern
+oC_FilterExpression
+oC_IdInColl
+oC_FunctionInvocation
+oC_FunctionName
+oC_ExplicitProcedureInvocation
+oC_ImplicitProcedureInvocation
+oC_ProcedureResultField
+oC_ProcedureName
+oC_Namespace
+oC_ListComprehension
+oC_PatternComprehension
+oC_PropertyLookup
+oC_CaseExpression
+oC_CaseAlternatives
+oC_Variable
+oC_NumberLiteral
+oC_MapLiteral
+oC_Parameter
+oC_PropertyExpression
+oC_PropertyKeyName
+oC_IntegerLiteral
+oC_DoubleLiteral
+oC_SchemaName
+oC_SymbolicName
+oC_ReservedWord
+oC_LeftArrowHead
+oC_RightArrowHead
+oC_Dash
+
+
+atn:
+[4, 1, 133, 1594, 2, 0, 7, 0, 2, 1, 7, 1, 2, 2, 7, 2, 2, 3, 7, 3, 2, 4, 7, 4, 2, 5, 7, 5, 2, 6, 7, 6, 2, 7, 7, 7, 2, 8, 7, 8, 2, 9, 7, 9, 2, 10, 7, 10, 2, 11, 7, 11, 2, 12, 7, 12, 2, 13, 7, 13, 2, 14, 7, 14, 2, 15, 7, 15, 2, 16, 7, 16, 2, 17, 7, 17, 2, 18, 7, 18, 2, 19, 7, 19, 2, 20, 7, 20, 2, 21, 7, 21, 2, 22, 7, 22, 2, 23, 7, 23, 2, 24, 7, 24, 2, 25, 7, 25, 2, 26, 7, 26, 2, 27, 7, 27, 2, 28, 7, 28, 2, 29, 7, 29, 2, 30, 7, 30, 2, 31, 7, 31, 2, 32, 7, 32, 2, 33, 7, 33, 2, 34, 7, 34, 2, 35, 7, 35, 2, 36, 7, 36, 2, 37, 7, 37, 2, 38, 7, 38, 2, 39, 7, 39, 2, 40, 7, 40, 2, 41, 7, 41, 2, 42, 7, 42, 2, 43, 7, 43, 2, 44, 7, 44, 2, 45, 7, 45, 2, 46, 7, 46, 2, 47, 7, 47, 2, 48, 7, 48, 2, 49, 7, 49, 2, 50, 7, 50, 2, 51, 7, 51, 2, 52, 7, 52, 2, 53, 7, 53, 2, 54, 7, 54, 2, 55, 7, 55, 2, 56, 7, 56, 2, 57, 7, 57, 2, 58, 7, 58, 2, 59, 7, 59, 2, 60, 7, 60, 2, 61, 7, 61, 2, 62, 7, 62, 2, 63, 7, 63, 2, 64, 7, 64, 2, 65, 7, 65, 2, 66, 7, 66, 2, 67, 7, 67, 2, 68, 7, 68, 2, 69, 7, 69, 2, 70, 7, 70, 2, 71, 7, 71, 2, 72, 7, 72, 2, 73, 7, 73, 2, 74, 7, 74, 2, 75, 7, 75, 2, 76, 7, 76, 2, 77, 7, 77, 2, 78, 7, 78, 2, 79, 7, 79, 2, 80, 7, 80, 2, 81, 7, 81, 2, 82, 7, 82, 2, 83, 7, 83, 2, 84, 7, 84, 2, 85, 7, 85, 2, 86, 7, 86, 2, 87, 7, 87, 2, 88, 7, 88, 2, 89, 7, 89, 2, 90, 7, 90, 2, 91, 7, 91, 2, 92, 7, 92, 2, 93, 7, 93, 2, 94, 7, 94, 2, 95, 7, 95, 2, 96, 7, 96, 2, 97, 7, 97, 2, 98, 7, 98, 2, 99, 7, 99, 1, 0, 3, 0, 202, 8, 0, 1, 0, 1, 0, 3, 0, 206, 8, 0, 1, 0, 3, 0, 209, 8, 0, 1, 0, 3, 0, 212, 8, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 3, 1, 219, 8, 1, 1, 1, 1, 1, 1, 1, 3, 1, 224, 8, 1, 1, 1, 3, 1, 227, 8, 1, 1, 2, 1, 2, 3, 2, 231, 8, 2, 1, 3, 1, 3, 3, 3, 235, 8, 3, 1, 3, 5, 3, 238, 8, 3, 10, 3, 12, 3, 241, 9, 3, 1, 4, 1, 4, 1, 4, 1, 4, 3, 4, 247, 8, 4, 1, 4, 1, 4, 1, 4, 3, 4, 252, 8, 4, 1, 4, 3, 4, 255, 8, 4, 1, 5, 1, 5, 3, 5, 259, 8, 5, 1, 6, 1, 6, 3, 6, 263, 8, 6, 5, 6, 265, 8, 6, 10, 6, 12, 6, 268, 9, 6, 1, 6, 1, 6, 1, 6, 3, 6, 273, 8, 6, 5, 6, 275, 8, 6, 10, 6, 12, 6, 278, 9, 6, 1, 6, 1, 6, 3, 6, 282, 8, 6, 1, 6, 5, 6, 285, 8, 6, 10, 6, 12, 6, 288, 9, 6, 1, 6, 3, 6, 291, 8, 6, 1, 6, 3, 6, 294, 8, 6, 3, 6, 296, 8, 6, 1, 7, 1, 7, 3, 7, 300, 8, 7, 5, 7, 302, 8, 7, 10, 7, 12, 7, 305, 9, 7, 1, 7, 1, 7, 3, 7, 309, 8, 7, 5, 7, 311, 8, 7, 10, 7, 12, 7, 314, 9, 7, 1, 7, 1, 7, 3, 7, 318, 8, 7, 4, 7, 320, 8, 7, 11, 7, 12, 7, 321, 1, 7, 1, 7, 1, 8, 1, 8, 1, 8, 1, 8, 1, 8, 3, 8, 331, 8, 8, 1, 9, 1, 9, 1, 9, 3, 9, 336, 8, 9, 1, 10, 1, 10, 3, 10, 340, 8, 10, 1, 10, 1, 10, 3, 10, 344, 8, 10, 1, 10, 1, 10, 3, 10, 348, 8, 10, 1, 10, 5, 10, 351, 8, 10, 10, 10, 12, 10, 354, 9, 10, 1, 10, 3, 10, 357, 8, 10, 1, 10, 3, 10, 360, 8, 10, 1, 11, 1, 11, 3, 11, 364, 8, 11, 1, 11, 1, 11, 1, 11, 1, 11, 1, 11, 1, 11, 1, 12, 1, 12, 3, 12, 374, 8, 12, 1, 12, 1, 12, 1, 12, 5, 12, 379, 8, 12, 10, 12, 12, 12, 382, 9, 12, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 1, 13, 3, 13, 394, 8, 13, 1, 14, 1, 14, 3, 14, 398, 8, 14, 1, 14, 1, 14, 1, 15, 1, 15, 3, 15, 404, 8, 15, 1, 15, 1, 15, 3, 15, 408, 8, 15, 1, 15, 1, 15, 3, 15, 412, 8, 15, 1, 15, 5, 15, 415, 8, 15, 10, 15, 12, 15, 418, 9, 15, 1, 16, 1, 16, 3, 16, 422, 8, 16, 1, 16, 1, 16, 3, 16, 426, 8, 16, 1, 16, 1, 16, 1, 16, 1, 16, 3, 16, 432, 8, 16, 1, 16, 1, 16, 3, 16, 436, 8, 16, 1, 16, 1, 16, 1, 16, 1, 16, 3, 16, 442, 8, 16, 1, 16, 1, 16, 3, 16, 446, 8, 16, 1, 16, 1, 16, 1, 16, 1, 16, 3, 16, 452, 8, 16, 1, 16, 1, 16, 3, 16, 456, 8, 16, 1, 17, 1, 17, 3, 17, 460, 8, 17, 1, 17, 1, 17, 3, 17, 464, 8, 17, 1, 17, 1, 17, 3, 17, 468, 8, 17, 1, 17, 1, 17, 3, 17, 472, 8, 17, 1, 17, 5, 17, 475, 8, 
17, 10, 17, 12, 17, 478, 9, 17, 1, 18, 1, 18, 1, 18, 1, 18, 3, 18, 484, 8, 18, 1, 18, 1, 18, 3, 18, 488, 8, 18, 1, 18, 5, 18, 491, 8, 18, 10, 18, 12, 18, 494, 9, 18, 1, 19, 1, 19, 1, 19, 1, 19, 3, 19, 500, 8, 19, 1, 20, 1, 20, 1, 20, 1, 20, 3, 20, 506, 8, 20, 1, 20, 1, 20, 1, 20, 3, 20, 511, 8, 20, 1, 21, 1, 21, 1, 21, 1, 21, 3, 21, 517, 8, 21, 1, 21, 1, 21, 1, 21, 1, 21, 3, 21, 523, 8, 21, 1, 22, 1, 22, 1, 22, 3, 22, 528, 8, 22, 1, 22, 1, 22, 3, 22, 532, 8, 22, 1, 22, 5, 22, 535, 8, 22, 10, 22, 12, 22, 538, 9, 22, 3, 22, 540, 8, 22, 1, 22, 3, 22, 543, 8, 22, 1, 22, 3, 22, 546, 8, 22, 1, 23, 1, 23, 1, 23, 1, 23, 1, 23, 3, 23, 553, 8, 23, 1, 23, 1, 23, 1, 24, 1, 24, 3, 24, 559, 8, 24, 1, 24, 3, 24, 562, 8, 24, 1, 24, 1, 24, 1, 24, 3, 24, 567, 8, 24, 1, 24, 3, 24, 570, 8, 24, 1, 25, 1, 25, 3, 25, 574, 8, 25, 1, 25, 3, 25, 577, 8, 25, 1, 25, 1, 25, 1, 25, 1, 26, 1, 26, 1, 26, 3, 26, 585, 8, 26, 1, 26, 1, 26, 3, 26, 589, 8, 26, 1, 26, 1, 26, 3, 26, 593, 8, 26, 1, 27, 1, 27, 3, 27, 597, 8, 27, 1, 27, 1, 27, 3, 27, 601, 8, 27, 1, 27, 5, 27, 604, 8, 27, 10, 27, 12, 27, 607, 9, 27, 1, 27, 1, 27, 3, 27, 611, 8, 27, 1, 27, 1, 27, 3, 27, 615, 8, 27, 1, 27, 5, 27, 618, 8, 27, 10, 27, 12, 27, 621, 9, 27, 3, 27, 623, 8, 27, 1, 28, 1, 28, 1, 28, 1, 28, 1, 28, 1, 28, 1, 28, 3, 28, 632, 8, 28, 1, 29, 1, 29, 1, 29, 1, 29, 1, 29, 1, 29, 1, 29, 3, 29, 641, 8, 29, 1, 29, 5, 29, 644, 8, 29, 10, 29, 12, 29, 647, 9, 29, 1, 30, 1, 30, 1, 30, 1, 30, 1, 31, 1, 31, 1, 31, 1, 31, 1, 32, 1, 32, 3, 32, 659, 8, 32, 1, 32, 3, 32, 662, 8, 32, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 1, 33, 3, 33, 678, 8, 33, 1, 34, 1, 34, 1, 34, 1, 34, 1, 35, 1, 35, 3, 35, 686, 8, 35, 1, 35, 1, 35, 3, 35, 690, 8, 35, 1, 35, 5, 35, 693, 8, 35, 10, 35, 12, 35, 696, 9, 35, 1, 36, 1, 36, 3, 36, 700, 8, 36, 1, 36, 1, 36, 3, 36, 704, 8, 36, 1, 36, 1, 36, 1, 36, 3, 36, 709, 8, 36, 1, 37, 1, 37, 1, 38, 1, 38, 3, 38, 715, 8, 38, 1, 38, 5, 38, 718, 8, 38, 10, 38, 12, 38, 721, 9, 38, 1, 38, 1, 38, 1, 38, 1, 38, 3, 38, 727, 8, 38, 1, 39, 1, 39, 3, 39, 731, 8, 39, 1, 39, 1, 39, 3, 39, 735, 8, 39, 3, 39, 737, 8, 39, 1, 39, 1, 39, 3, 39, 741, 8, 39, 3, 39, 743, 8, 39, 1, 39, 1, 39, 3, 39, 747, 8, 39, 3, 39, 749, 8, 39, 1, 39, 1, 39, 1, 40, 1, 40, 3, 40, 755, 8, 40, 1, 40, 1, 40, 1, 41, 1, 41, 3, 41, 761, 8, 41, 1, 41, 1, 41, 3, 41, 765, 8, 41, 1, 41, 3, 41, 768, 8, 41, 1, 41, 3, 41, 771, 8, 41, 1, 41, 1, 41, 3, 41, 775, 8, 41, 1, 41, 1, 41, 1, 41, 1, 41, 3, 41, 781, 8, 41, 1, 41, 1, 41, 3, 41, 785, 8, 41, 1, 41, 3, 41, 788, 8, 41, 1, 41, 3, 41, 791, 8, 41, 1, 41, 1, 41, 1, 41, 1, 41, 3, 41, 797, 8, 41, 1, 41, 3, 41, 800, 8, 41, 1, 41, 3, 41, 803, 8, 41, 1, 41, 1, 41, 3, 41, 807, 8, 41, 1, 41, 1, 41, 1, 41, 1, 41, 3, 41, 813, 8, 41, 1, 41, 3, 41, 816, 8, 41, 1, 41, 3, 41, 819, 8, 41, 1, 41, 1, 41, 3, 41, 823, 8, 41, 1, 42, 1, 42, 3, 42, 827, 8, 42, 1, 42, 1, 42, 3, 42, 831, 8, 42, 3, 42, 833, 8, 42, 1, 42, 1, 42, 3, 42, 837, 8, 42, 3, 42, 839, 8, 42, 1, 42, 3, 42, 842, 8, 42, 1, 42, 1, 42, 3, 42, 846, 8, 42, 3, 42, 848, 8, 42, 1, 42, 1, 42, 1, 43, 1, 43, 3, 43, 854, 8, 43, 1, 44, 1, 44, 3, 44, 858, 8, 44, 1, 44, 1, 44, 3, 44, 862, 8, 44, 1, 44, 1, 44, 3, 44, 866, 8, 44, 1, 44, 3, 44, 869, 8, 44, 1, 44, 5, 44, 872, 8, 44, 10, 44, 12, 44, 875, 9, 44, 1, 45, 1, 45, 3, 45, 879, 8, 45, 1, 45, 5, 45, 882, 8, 45, 10, 45, 12, 45, 885, 9, 45, 1, 46, 1, 46, 3, 46, 889, 8, 46, 1, 46, 1, 46, 1, 47, 1, 47, 3, 47, 895, 8, 47, 1, 47, 1, 47, 3, 47, 899, 8, 47, 3, 47, 901, 8, 47, 1, 47, 1, 47, 3, 47, 905, 8, 47, 1, 47, 1, 
47, 3, 47, 909, 8, 47, 3, 47, 911, 8, 47, 3, 47, 913, 8, 47, 1, 48, 1, 48, 1, 49, 1, 49, 1, 50, 1, 50, 1, 51, 1, 51, 1, 51, 1, 51, 1, 51, 5, 51, 926, 8, 51, 10, 51, 12, 51, 929, 9, 51, 1, 52, 1, 52, 1, 52, 1, 52, 1, 52, 5, 52, 936, 8, 52, 10, 52, 12, 52, 939, 9, 52, 1, 53, 1, 53, 1, 53, 1, 53, 1, 53, 5, 53, 946, 8, 53, 10, 53, 12, 53, 949, 9, 53, 1, 54, 1, 54, 3, 54, 953, 8, 54, 5, 54, 955, 8, 54, 10, 54, 12, 54, 958, 9, 54, 1, 54, 1, 54, 1, 55, 1, 55, 3, 55, 964, 8, 55, 1, 55, 5, 55, 967, 8, 55, 10, 55, 12, 55, 970, 9, 55, 1, 56, 1, 56, 3, 56, 974, 8, 56, 1, 56, 1, 56, 3, 56, 978, 8, 56, 1, 56, 1, 56, 3, 56, 982, 8, 56, 1, 56, 1, 56, 3, 56, 986, 8, 56, 1, 56, 5, 56, 989, 8, 56, 10, 56, 12, 56, 992, 9, 56, 1, 57, 1, 57, 3, 57, 996, 8, 57, 1, 57, 1, 57, 3, 57, 1000, 8, 57, 1, 57, 1, 57, 3, 57, 1004, 8, 57, 1, 57, 1, 57, 3, 57, 1008, 8, 57, 1, 57, 1, 57, 3, 57, 1012, 8, 57, 1, 57, 1, 57, 3, 57, 1016, 8, 57, 1, 57, 5, 57, 1019, 8, 57, 10, 57, 12, 57, 1022, 9, 57, 1, 58, 1, 58, 3, 58, 1026, 8, 58, 1, 58, 1, 58, 3, 58, 1030, 8, 58, 1, 58, 5, 58, 1033, 8, 58, 10, 58, 12, 58, 1036, 9, 58, 1, 59, 1, 59, 3, 59, 1040, 8, 59, 5, 59, 1042, 8, 59, 10, 59, 12, 59, 1045, 9, 59, 1, 59, 1, 59, 1, 60, 1, 60, 1, 60, 1, 60, 5, 60, 1053, 8, 60, 10, 60, 12, 60, 1056, 9, 60, 1, 61, 1, 61, 1, 61, 3, 61, 1061, 8, 61, 1, 61, 1, 61, 3, 61, 1065, 8, 61, 1, 61, 1, 61, 1, 61, 1, 61, 1, 61, 3, 61, 1072, 8, 61, 1, 61, 1, 61, 3, 61, 1076, 8, 61, 1, 61, 1, 61, 3, 61, 1080, 8, 61, 1, 61, 3, 61, 1083, 8, 61, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 3, 62, 1097, 8, 62, 1, 62, 3, 62, 1100, 8, 62, 1, 62, 1, 62, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 3, 63, 1114, 8, 63, 1, 64, 1, 64, 3, 64, 1118, 8, 64, 1, 64, 5, 64, 1121, 8, 64, 10, 64, 12, 64, 1124, 9, 64, 1, 64, 3, 64, 1127, 8, 64, 1, 64, 3, 64, 1130, 8, 64, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1137, 8, 65, 1, 65, 1, 65, 3, 65, 1141, 8, 65, 1, 65, 1, 65, 3, 65, 1145, 8, 65, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1152, 8, 65, 1, 65, 1, 65, 3, 65, 1156, 8, 65, 1, 65, 1, 65, 3, 65, 1160, 8, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1166, 8, 65, 1, 65, 1, 65, 3, 65, 1170, 8, 65, 1, 65, 1, 65, 3, 65, 1174, 8, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1180, 8, 65, 1, 65, 1, 65, 3, 65, 1184, 8, 65, 1, 65, 1, 65, 3, 65, 1188, 8, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1194, 8, 65, 1, 65, 1, 65, 3, 65, 1198, 8, 65, 1, 65, 1, 65, 3, 65, 1202, 8, 65, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 3, 65, 1210, 8, 65, 1, 66, 1, 66, 1, 66, 1, 66, 1, 66, 1, 66, 3, 66, 1218, 8, 66, 1, 67, 1, 67, 1, 68, 1, 68, 3, 68, 1224, 8, 68, 1, 68, 1, 68, 3, 68, 1228, 8, 68, 1, 68, 1, 68, 3, 68, 1232, 8, 68, 1, 68, 1, 68, 3, 68, 1236, 8, 68, 5, 68, 1238, 8, 68, 10, 68, 12, 68, 1241, 9, 68, 3, 68, 1243, 8, 68, 1, 68, 1, 68, 1, 69, 1, 69, 3, 69, 1249, 8, 69, 1, 69, 1, 69, 1, 69, 3, 69, 1254, 8, 69, 1, 69, 1, 69, 1, 69, 3, 69, 1259, 8, 69, 1, 69, 1, 69, 1, 69, 3, 69, 1264, 8, 69, 1, 69, 1, 69, 1, 69, 3, 69, 1269, 8, 69, 1, 69, 1, 69, 1, 69, 3, 69, 1274, 8, 69, 1, 69, 3, 69, 1277, 8, 69, 1, 70, 1, 70, 3, 70, 1281, 8, 70, 1, 70, 1, 70, 3, 70, 1285, 8, 70, 1, 70, 1, 70, 1, 71, 1, 71, 3, 71, 1291, 8, 71, 1, 71, 4, 71, 1294, 8, 71, 11, 71, 12, 71, 1295, 1, 72, 1, 72, 3, 72, 1300, 8, 72, 1, 72, 3, 72, 1303, 8, 72, 1, 73, 1, 73, 1, 73, 1, 73, 1, 73, 1, 73, 1, 74, 1, 74, 3, 74, 1313, 8, 74, 1, 74, 1, 74, 3, 74, 1317, 8, 74, 1, 74, 1, 74, 3, 74, 1321, 8, 74, 3, 74, 1323, 8, 74, 1, 74, 1, 74, 3, 74, 1327, 8, 74, 1, 74, 1, 74, 3, 74, 1331, 8, 74, 1, 
74, 1, 74, 3, 74, 1335, 8, 74, 5, 74, 1337, 8, 74, 10, 74, 12, 74, 1340, 9, 74, 3, 74, 1342, 8, 74, 1, 74, 1, 74, 1, 75, 1, 75, 1, 75, 1, 75, 3, 75, 1350, 8, 75, 1, 76, 1, 76, 3, 76, 1354, 8, 76, 1, 76, 1, 76, 3, 76, 1358, 8, 76, 1, 76, 1, 76, 3, 76, 1362, 8, 76, 1, 76, 1, 76, 3, 76, 1366, 8, 76, 1, 76, 1, 76, 3, 76, 1370, 8, 76, 5, 76, 1372, 8, 76, 10, 76, 12, 76, 1375, 9, 76, 3, 76, 1377, 8, 76, 1, 76, 1, 76, 1, 77, 1, 77, 1, 78, 1, 78, 1, 79, 1, 79, 1, 79, 1, 80, 1, 80, 1, 80, 5, 80, 1391, 8, 80, 10, 80, 12, 80, 1394, 9, 80, 1, 81, 1, 81, 3, 81, 1398, 8, 81, 1, 81, 1, 81, 3, 81, 1402, 8, 81, 1, 81, 1, 81, 3, 81, 1406, 8, 81, 1, 81, 3, 81, 1409, 8, 81, 1, 81, 3, 81, 1412, 8, 81, 1, 81, 1, 81, 1, 82, 1, 82, 3, 82, 1418, 8, 82, 1, 82, 1, 82, 3, 82, 1422, 8, 82, 1, 82, 1, 82, 3, 82, 1426, 8, 82, 3, 82, 1428, 8, 82, 1, 82, 1, 82, 3, 82, 1432, 8, 82, 1, 82, 1, 82, 3, 82, 1436, 8, 82, 1, 82, 1, 82, 3, 82, 1440, 8, 82, 3, 82, 1442, 8, 82, 1, 82, 1, 82, 3, 82, 1446, 8, 82, 1, 82, 1, 82, 3, 82, 1450, 8, 82, 1, 82, 1, 82, 1, 83, 1, 83, 3, 83, 1456, 8, 83, 1, 83, 1, 83, 1, 84, 1, 84, 3, 84, 1462, 8, 84, 1, 84, 4, 84, 1465, 8, 84, 11, 84, 12, 84, 1466, 1, 84, 1, 84, 3, 84, 1471, 8, 84, 1, 84, 1, 84, 3, 84, 1475, 8, 84, 1, 84, 4, 84, 1478, 8, 84, 11, 84, 12, 84, 1479, 3, 84, 1482, 8, 84, 1, 84, 3, 84, 1485, 8, 84, 1, 84, 1, 84, 3, 84, 1489, 8, 84, 1, 84, 3, 84, 1492, 8, 84, 1, 84, 3, 84, 1495, 8, 84, 1, 84, 1, 84, 1, 85, 1, 85, 3, 85, 1501, 8, 85, 1, 85, 1, 85, 3, 85, 1505, 8, 85, 1, 85, 1, 85, 3, 85, 1509, 8, 85, 1, 85, 1, 85, 1, 86, 1, 86, 1, 87, 1, 87, 3, 87, 1517, 8, 87, 1, 88, 1, 88, 3, 88, 1521, 8, 88, 1, 88, 1, 88, 3, 88, 1525, 8, 88, 1, 88, 1, 88, 3, 88, 1529, 8, 88, 1, 88, 1, 88, 3, 88, 1533, 8, 88, 1, 88, 1, 88, 3, 88, 1537, 8, 88, 1, 88, 1, 88, 3, 88, 1541, 8, 88, 1, 88, 1, 88, 3, 88, 1545, 8, 88, 1, 88, 1, 88, 3, 88, 1549, 8, 88, 5, 88, 1551, 8, 88, 10, 88, 12, 88, 1554, 9, 88, 3, 88, 1556, 8, 88, 1, 88, 1, 88, 1, 89, 1, 89, 1, 89, 3, 89, 1563, 8, 89, 1, 90, 1, 90, 3, 90, 1567, 8, 90, 1, 90, 4, 90, 1570, 8, 90, 11, 90, 12, 90, 1571, 1, 91, 1, 91, 1, 92, 1, 92, 1, 93, 1, 93, 1, 94, 1, 94, 3, 94, 1582, 8, 94, 1, 95, 1, 95, 1, 96, 1, 96, 1, 97, 1, 97, 1, 98, 1, 98, 1, 99, 1, 99, 1, 99, 0, 0, 100, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 0, 10, 1, 0, 70, 73, 1, 0, 13, 14, 1, 0, 93, 94, 1, 0, 103, 105, 1, 0, 113, 114, 4, 0, 89, 92, 106, 106, 115, 117, 130, 130, 6, 0, 48, 60, 63, 73, 77, 85, 87, 88, 93, 100, 118, 127, 2, 0, 19, 19, 27, 30, 2, 0, 20, 20, 31, 34, 2, 0, 14, 14, 35, 45, 1820, 0, 201, 1, 0, 0, 0, 2, 226, 1, 0, 0, 0, 4, 230, 1, 0, 0, 0, 6, 232, 1, 0, 0, 0, 8, 254, 1, 0, 0, 0, 10, 258, 1, 0, 0, 0, 12, 295, 1, 0, 0, 0, 14, 319, 1, 0, 0, 0, 16, 330, 1, 0, 0, 0, 18, 335, 1, 0, 0, 0, 20, 339, 1, 0, 0, 0, 22, 361, 1, 0, 0, 0, 24, 371, 1, 0, 0, 0, 26, 393, 1, 0, 0, 0, 28, 395, 1, 0, 0, 0, 30, 401, 1, 0, 0, 0, 32, 455, 1, 0, 0, 0, 34, 459, 1, 0, 0, 0, 36, 479, 1, 0, 0, 0, 38, 499, 1, 0, 0, 0, 40, 501, 1, 0, 0, 0, 42, 512, 1, 0, 0, 0, 44, 539, 1, 0, 0, 0, 46, 552, 1, 0, 0, 0, 48, 556, 1, 0, 0, 0, 50, 571, 1, 0, 0, 0, 52, 581, 1, 0, 0, 0, 54, 622, 1, 0, 0, 0, 56, 631, 1, 0, 0, 0, 58, 633, 1, 0, 0, 
0, 60, 648, 1, 0, 0, 0, 62, 652, 1, 0, 0, 0, 64, 656, 1, 0, 0, 0, 66, 677, 1, 0, 0, 0, 68, 679, 1, 0, 0, 0, 70, 683, 1, 0, 0, 0, 72, 708, 1, 0, 0, 0, 74, 710, 1, 0, 0, 0, 76, 726, 1, 0, 0, 0, 78, 728, 1, 0, 0, 0, 80, 752, 1, 0, 0, 0, 82, 822, 1, 0, 0, 0, 84, 824, 1, 0, 0, 0, 86, 853, 1, 0, 0, 0, 88, 855, 1, 0, 0, 0, 90, 876, 1, 0, 0, 0, 92, 886, 1, 0, 0, 0, 94, 892, 1, 0, 0, 0, 96, 914, 1, 0, 0, 0, 98, 916, 1, 0, 0, 0, 100, 918, 1, 0, 0, 0, 102, 920, 1, 0, 0, 0, 104, 930, 1, 0, 0, 0, 106, 940, 1, 0, 0, 0, 108, 956, 1, 0, 0, 0, 110, 961, 1, 0, 0, 0, 112, 971, 1, 0, 0, 0, 114, 993, 1, 0, 0, 0, 116, 1023, 1, 0, 0, 0, 118, 1043, 1, 0, 0, 0, 120, 1048, 1, 0, 0, 0, 122, 1082, 1, 0, 0, 0, 124, 1096, 1, 0, 0, 0, 126, 1113, 1, 0, 0, 0, 128, 1115, 1, 0, 0, 0, 130, 1209, 1, 0, 0, 0, 132, 1217, 1, 0, 0, 0, 134, 1219, 1, 0, 0, 0, 136, 1221, 1, 0, 0, 0, 138, 1276, 1, 0, 0, 0, 140, 1278, 1, 0, 0, 0, 142, 1288, 1, 0, 0, 0, 144, 1297, 1, 0, 0, 0, 146, 1304, 1, 0, 0, 0, 148, 1310, 1, 0, 0, 0, 150, 1349, 1, 0, 0, 0, 152, 1351, 1, 0, 0, 0, 154, 1380, 1, 0, 0, 0, 156, 1382, 1, 0, 0, 0, 158, 1384, 1, 0, 0, 0, 160, 1392, 1, 0, 0, 0, 162, 1395, 1, 0, 0, 0, 164, 1415, 1, 0, 0, 0, 166, 1453, 1, 0, 0, 0, 168, 1481, 1, 0, 0, 0, 170, 1498, 1, 0, 0, 0, 172, 1512, 1, 0, 0, 0, 174, 1516, 1, 0, 0, 0, 176, 1518, 1, 0, 0, 0, 178, 1559, 1, 0, 0, 0, 180, 1564, 1, 0, 0, 0, 182, 1573, 1, 0, 0, 0, 184, 1575, 1, 0, 0, 0, 186, 1577, 1, 0, 0, 0, 188, 1581, 1, 0, 0, 0, 190, 1583, 1, 0, 0, 0, 192, 1585, 1, 0, 0, 0, 194, 1587, 1, 0, 0, 0, 196, 1589, 1, 0, 0, 0, 198, 1591, 1, 0, 0, 0, 200, 202, 5, 131, 0, 0, 201, 200, 1, 0, 0, 0, 201, 202, 1, 0, 0, 0, 202, 203, 1, 0, 0, 0, 203, 208, 3, 2, 1, 0, 204, 206, 5, 131, 0, 0, 205, 204, 1, 0, 0, 0, 205, 206, 1, 0, 0, 0, 206, 207, 1, 0, 0, 0, 207, 209, 5, 1, 0, 0, 208, 205, 1, 0, 0, 0, 208, 209, 1, 0, 0, 0, 209, 211, 1, 0, 0, 0, 210, 212, 5, 131, 0, 0, 211, 210, 1, 0, 0, 0, 211, 212, 1, 0, 0, 0, 212, 213, 1, 0, 0, 0, 213, 214, 5, 0, 0, 1, 214, 1, 1, 0, 0, 0, 215, 227, 3, 4, 2, 0, 216, 218, 5, 46, 0, 0, 217, 219, 5, 131, 0, 0, 218, 217, 1, 0, 0, 0, 218, 219, 1, 0, 0, 0, 219, 220, 1, 0, 0, 0, 220, 227, 3, 4, 2, 0, 221, 223, 5, 47, 0, 0, 222, 224, 5, 131, 0, 0, 223, 222, 1, 0, 0, 0, 223, 224, 1, 0, 0, 0, 224, 225, 1, 0, 0, 0, 225, 227, 3, 4, 2, 0, 226, 215, 1, 0, 0, 0, 226, 216, 1, 0, 0, 0, 226, 221, 1, 0, 0, 0, 227, 3, 1, 0, 0, 0, 228, 231, 3, 6, 3, 0, 229, 231, 3, 42, 21, 0, 230, 228, 1, 0, 0, 0, 230, 229, 1, 0, 0, 0, 231, 5, 1, 0, 0, 0, 232, 239, 3, 10, 5, 0, 233, 235, 5, 131, 0, 0, 234, 233, 1, 0, 0, 0, 234, 235, 1, 0, 0, 0, 235, 236, 1, 0, 0, 0, 236, 238, 3, 8, 4, 0, 237, 234, 1, 0, 0, 0, 238, 241, 1, 0, 0, 0, 239, 237, 1, 0, 0, 0, 239, 240, 1, 0, 0, 0, 240, 7, 1, 0, 0, 0, 241, 239, 1, 0, 0, 0, 242, 243, 5, 48, 0, 0, 243, 244, 5, 131, 0, 0, 244, 246, 5, 49, 0, 0, 245, 247, 5, 131, 0, 0, 246, 245, 1, 0, 0, 0, 246, 247, 1, 0, 0, 0, 247, 248, 1, 0, 0, 0, 248, 255, 3, 10, 5, 0, 249, 251, 5, 48, 0, 0, 250, 252, 5, 131, 0, 0, 251, 250, 1, 0, 0, 0, 251, 252, 1, 0, 0, 0, 252, 253, 1, 0, 0, 0, 253, 255, 3, 10, 5, 0, 254, 242, 1, 0, 0, 0, 254, 249, 1, 0, 0, 0, 255, 9, 1, 0, 0, 0, 256, 259, 3, 12, 6, 0, 257, 259, 3, 14, 7, 0, 258, 256, 1, 0, 0, 0, 258, 257, 1, 0, 0, 0, 259, 11, 1, 0, 0, 0, 260, 262, 3, 18, 9, 0, 261, 263, 5, 131, 0, 0, 262, 261, 1, 0, 0, 0, 262, 263, 1, 0, 0, 0, 263, 265, 1, 0, 0, 0, 264, 260, 1, 0, 0, 0, 265, 268, 1, 0, 0, 0, 266, 264, 1, 0, 0, 0, 266, 267, 1, 0, 0, 0, 267, 269, 1, 0, 0, 0, 268, 266, 1, 0, 0, 0, 269, 296, 3, 50, 25, 0, 270, 272, 3, 18, 9, 0, 271, 273, 5, 131, 0, 0, 272, 
271, 1, 0, 0, 0, 272, 273, 1, 0, 0, 0, 273, 275, 1, 0, 0, 0, 274, 270, 1, 0, 0, 0, 275, 278, 1, 0, 0, 0, 276, 274, 1, 0, 0, 0, 276, 277, 1, 0, 0, 0, 277, 279, 1, 0, 0, 0, 278, 276, 1, 0, 0, 0, 279, 286, 3, 16, 8, 0, 280, 282, 5, 131, 0, 0, 281, 280, 1, 0, 0, 0, 281, 282, 1, 0, 0, 0, 282, 283, 1, 0, 0, 0, 283, 285, 3, 16, 8, 0, 284, 281, 1, 0, 0, 0, 285, 288, 1, 0, 0, 0, 286, 284, 1, 0, 0, 0, 286, 287, 1, 0, 0, 0, 287, 293, 1, 0, 0, 0, 288, 286, 1, 0, 0, 0, 289, 291, 5, 131, 0, 0, 290, 289, 1, 0, 0, 0, 290, 291, 1, 0, 0, 0, 291, 292, 1, 0, 0, 0, 292, 294, 3, 50, 25, 0, 293, 290, 1, 0, 0, 0, 293, 294, 1, 0, 0, 0, 294, 296, 1, 0, 0, 0, 295, 266, 1, 0, 0, 0, 295, 276, 1, 0, 0, 0, 296, 13, 1, 0, 0, 0, 297, 299, 3, 18, 9, 0, 298, 300, 5, 131, 0, 0, 299, 298, 1, 0, 0, 0, 299, 300, 1, 0, 0, 0, 300, 302, 1, 0, 0, 0, 301, 297, 1, 0, 0, 0, 302, 305, 1, 0, 0, 0, 303, 301, 1, 0, 0, 0, 303, 304, 1, 0, 0, 0, 304, 312, 1, 0, 0, 0, 305, 303, 1, 0, 0, 0, 306, 308, 3, 16, 8, 0, 307, 309, 5, 131, 0, 0, 308, 307, 1, 0, 0, 0, 308, 309, 1, 0, 0, 0, 309, 311, 1, 0, 0, 0, 310, 306, 1, 0, 0, 0, 311, 314, 1, 0, 0, 0, 312, 310, 1, 0, 0, 0, 312, 313, 1, 0, 0, 0, 313, 315, 1, 0, 0, 0, 314, 312, 1, 0, 0, 0, 315, 317, 3, 48, 24, 0, 316, 318, 5, 131, 0, 0, 317, 316, 1, 0, 0, 0, 317, 318, 1, 0, 0, 0, 318, 320, 1, 0, 0, 0, 319, 303, 1, 0, 0, 0, 320, 321, 1, 0, 0, 0, 321, 319, 1, 0, 0, 0, 321, 322, 1, 0, 0, 0, 322, 323, 1, 0, 0, 0, 323, 324, 3, 12, 6, 0, 324, 15, 1, 0, 0, 0, 325, 331, 3, 28, 14, 0, 326, 331, 3, 24, 12, 0, 327, 331, 3, 34, 17, 0, 328, 331, 3, 30, 15, 0, 329, 331, 3, 36, 18, 0, 330, 325, 1, 0, 0, 0, 330, 326, 1, 0, 0, 0, 330, 327, 1, 0, 0, 0, 330, 328, 1, 0, 0, 0, 330, 329, 1, 0, 0, 0, 331, 17, 1, 0, 0, 0, 332, 336, 3, 20, 10, 0, 333, 336, 3, 22, 11, 0, 334, 336, 3, 40, 20, 0, 335, 332, 1, 0, 0, 0, 335, 333, 1, 0, 0, 0, 335, 334, 1, 0, 0, 0, 336, 19, 1, 0, 0, 0, 337, 338, 5, 50, 0, 0, 338, 340, 5, 131, 0, 0, 339, 337, 1, 0, 0, 0, 339, 340, 1, 0, 0, 0, 340, 341, 1, 0, 0, 0, 341, 343, 5, 51, 0, 0, 342, 344, 5, 131, 0, 0, 343, 342, 1, 0, 0, 0, 343, 344, 1, 0, 0, 0, 344, 345, 1, 0, 0, 0, 345, 352, 3, 70, 35, 0, 346, 348, 5, 131, 0, 0, 347, 346, 1, 0, 0, 0, 347, 348, 1, 0, 0, 0, 348, 349, 1, 0, 0, 0, 349, 351, 3, 66, 33, 0, 350, 347, 1, 0, 0, 0, 351, 354, 1, 0, 0, 0, 352, 350, 1, 0, 0, 0, 352, 353, 1, 0, 0, 0, 353, 359, 1, 0, 0, 0, 354, 352, 1, 0, 0, 0, 355, 357, 5, 131, 0, 0, 356, 355, 1, 0, 0, 0, 356, 357, 1, 0, 0, 0, 357, 358, 1, 0, 0, 0, 358, 360, 3, 68, 34, 0, 359, 356, 1, 0, 0, 0, 359, 360, 1, 0, 0, 0, 360, 21, 1, 0, 0, 0, 361, 363, 5, 52, 0, 0, 362, 364, 5, 131, 0, 0, 363, 362, 1, 0, 0, 0, 363, 364, 1, 0, 0, 0, 364, 365, 1, 0, 0, 0, 365, 366, 3, 100, 50, 0, 366, 367, 5, 131, 0, 0, 367, 368, 5, 53, 0, 0, 368, 369, 5, 131, 0, 0, 369, 370, 3, 172, 86, 0, 370, 23, 1, 0, 0, 0, 371, 373, 5, 54, 0, 0, 372, 374, 5, 131, 0, 0, 373, 372, 1, 0, 0, 0, 373, 374, 1, 0, 0, 0, 374, 375, 1, 0, 0, 0, 375, 380, 3, 72, 36, 0, 376, 377, 5, 131, 0, 0, 377, 379, 3, 26, 13, 0, 378, 376, 1, 0, 0, 0, 379, 382, 1, 0, 0, 0, 380, 378, 1, 0, 0, 0, 380, 381, 1, 0, 0, 0, 381, 25, 1, 0, 0, 0, 382, 380, 1, 0, 0, 0, 383, 384, 5, 55, 0, 0, 384, 385, 5, 131, 0, 0, 385, 386, 5, 51, 0, 0, 386, 387, 5, 131, 0, 0, 387, 394, 3, 30, 15, 0, 388, 389, 5, 55, 0, 0, 389, 390, 5, 131, 0, 0, 390, 391, 5, 56, 0, 0, 391, 392, 5, 131, 0, 0, 392, 394, 3, 30, 15, 0, 393, 383, 1, 0, 0, 0, 393, 388, 1, 0, 0, 0, 394, 27, 1, 0, 0, 0, 395, 397, 5, 56, 0, 0, 396, 398, 5, 131, 0, 0, 397, 396, 1, 0, 0, 0, 397, 398, 1, 0, 0, 0, 398, 399, 1, 0, 0, 0, 399, 400, 3, 70, 35, 
0, 400, 29, 1, 0, 0, 0, 401, 403, 5, 57, 0, 0, 402, 404, 5, 131, 0, 0, 403, 402, 1, 0, 0, 0, 403, 404, 1, 0, 0, 0, 404, 405, 1, 0, 0, 0, 405, 416, 3, 32, 16, 0, 406, 408, 5, 131, 0, 0, 407, 406, 1, 0, 0, 0, 407, 408, 1, 0, 0, 0, 408, 409, 1, 0, 0, 0, 409, 411, 5, 2, 0, 0, 410, 412, 5, 131, 0, 0, 411, 410, 1, 0, 0, 0, 411, 412, 1, 0, 0, 0, 412, 413, 1, 0, 0, 0, 413, 415, 3, 32, 16, 0, 414, 407, 1, 0, 0, 0, 415, 418, 1, 0, 0, 0, 416, 414, 1, 0, 0, 0, 416, 417, 1, 0, 0, 0, 417, 31, 1, 0, 0, 0, 418, 416, 1, 0, 0, 0, 419, 421, 3, 180, 90, 0, 420, 422, 5, 131, 0, 0, 421, 420, 1, 0, 0, 0, 421, 422, 1, 0, 0, 0, 422, 423, 1, 0, 0, 0, 423, 425, 5, 3, 0, 0, 424, 426, 5, 131, 0, 0, 425, 424, 1, 0, 0, 0, 425, 426, 1, 0, 0, 0, 426, 427, 1, 0, 0, 0, 427, 428, 3, 100, 50, 0, 428, 456, 1, 0, 0, 0, 429, 431, 3, 172, 86, 0, 430, 432, 5, 131, 0, 0, 431, 430, 1, 0, 0, 0, 431, 432, 1, 0, 0, 0, 432, 433, 1, 0, 0, 0, 433, 435, 5, 3, 0, 0, 434, 436, 5, 131, 0, 0, 435, 434, 1, 0, 0, 0, 435, 436, 1, 0, 0, 0, 436, 437, 1, 0, 0, 0, 437, 438, 3, 100, 50, 0, 438, 456, 1, 0, 0, 0, 439, 441, 3, 172, 86, 0, 440, 442, 5, 131, 0, 0, 441, 440, 1, 0, 0, 0, 441, 442, 1, 0, 0, 0, 442, 443, 1, 0, 0, 0, 443, 445, 5, 4, 0, 0, 444, 446, 5, 131, 0, 0, 445, 444, 1, 0, 0, 0, 445, 446, 1, 0, 0, 0, 446, 447, 1, 0, 0, 0, 447, 448, 3, 100, 50, 0, 448, 456, 1, 0, 0, 0, 449, 451, 3, 172, 86, 0, 450, 452, 5, 131, 0, 0, 451, 450, 1, 0, 0, 0, 451, 452, 1, 0, 0, 0, 452, 453, 1, 0, 0, 0, 453, 454, 3, 90, 45, 0, 454, 456, 1, 0, 0, 0, 455, 419, 1, 0, 0, 0, 455, 429, 1, 0, 0, 0, 455, 439, 1, 0, 0, 0, 455, 449, 1, 0, 0, 0, 456, 33, 1, 0, 0, 0, 457, 458, 5, 58, 0, 0, 458, 460, 5, 131, 0, 0, 459, 457, 1, 0, 0, 0, 459, 460, 1, 0, 0, 0, 460, 461, 1, 0, 0, 0, 461, 463, 5, 59, 0, 0, 462, 464, 5, 131, 0, 0, 463, 462, 1, 0, 0, 0, 463, 464, 1, 0, 0, 0, 464, 465, 1, 0, 0, 0, 465, 476, 3, 100, 50, 0, 466, 468, 5, 131, 0, 0, 467, 466, 1, 0, 0, 0, 467, 468, 1, 0, 0, 0, 468, 469, 1, 0, 0, 0, 469, 471, 5, 2, 0, 0, 470, 472, 5, 131, 0, 0, 471, 470, 1, 0, 0, 0, 471, 472, 1, 0, 0, 0, 472, 473, 1, 0, 0, 0, 473, 475, 3, 100, 50, 0, 474, 467, 1, 0, 0, 0, 475, 478, 1, 0, 0, 0, 476, 474, 1, 0, 0, 0, 476, 477, 1, 0, 0, 0, 477, 35, 1, 0, 0, 0, 478, 476, 1, 0, 0, 0, 479, 480, 5, 60, 0, 0, 480, 481, 5, 131, 0, 0, 481, 492, 3, 38, 19, 0, 482, 484, 5, 131, 0, 0, 483, 482, 1, 0, 0, 0, 483, 484, 1, 0, 0, 0, 484, 485, 1, 0, 0, 0, 485, 487, 5, 2, 0, 0, 486, 488, 5, 131, 0, 0, 487, 486, 1, 0, 0, 0, 487, 488, 1, 0, 0, 0, 488, 489, 1, 0, 0, 0, 489, 491, 3, 38, 19, 0, 490, 483, 1, 0, 0, 0, 491, 494, 1, 0, 0, 0, 492, 490, 1, 0, 0, 0, 492, 493, 1, 0, 0, 0, 493, 37, 1, 0, 0, 0, 494, 492, 1, 0, 0, 0, 495, 496, 3, 172, 86, 0, 496, 497, 3, 90, 45, 0, 497, 500, 1, 0, 0, 0, 498, 500, 3, 180, 90, 0, 499, 495, 1, 0, 0, 0, 499, 498, 1, 0, 0, 0, 500, 39, 1, 0, 0, 0, 501, 502, 5, 61, 0, 0, 502, 503, 5, 131, 0, 0, 503, 510, 3, 152, 76, 0, 504, 506, 5, 131, 0, 0, 505, 504, 1, 0, 0, 0, 505, 506, 1, 0, 0, 0, 506, 507, 1, 0, 0, 0, 507, 508, 5, 62, 0, 0, 508, 509, 5, 131, 0, 0, 509, 511, 3, 44, 22, 0, 510, 505, 1, 0, 0, 0, 510, 511, 1, 0, 0, 0, 511, 41, 1, 0, 0, 0, 512, 513, 5, 61, 0, 0, 513, 516, 5, 131, 0, 0, 514, 517, 3, 152, 76, 0, 515, 517, 3, 154, 77, 0, 516, 514, 1, 0, 0, 0, 516, 515, 1, 0, 0, 0, 517, 522, 1, 0, 0, 0, 518, 519, 5, 131, 0, 0, 519, 520, 5, 62, 0, 0, 520, 521, 5, 131, 0, 0, 521, 523, 3, 44, 22, 0, 522, 518, 1, 0, 0, 0, 522, 523, 1, 0, 0, 0, 523, 43, 1, 0, 0, 0, 524, 540, 5, 5, 0, 0, 525, 536, 3, 46, 23, 0, 526, 528, 5, 131, 0, 0, 527, 526, 1, 0, 0, 0, 527, 528, 1, 0, 0, 0, 528, 529, 
1, 0, 0, 0, 529, 531, 5, 2, 0, 0, 530, 532, 5, 131, 0, 0, 531, 530, 1, 0, 0, 0, 531, 532, 1, 0, 0, 0, 532, 533, 1, 0, 0, 0, 533, 535, 3, 46, 23, 0, 534, 527, 1, 0, 0, 0, 535, 538, 1, 0, 0, 0, 536, 534, 1, 0, 0, 0, 536, 537, 1, 0, 0, 0, 537, 540, 1, 0, 0, 0, 538, 536, 1, 0, 0, 0, 539, 524, 1, 0, 0, 0, 539, 525, 1, 0, 0, 0, 540, 545, 1, 0, 0, 0, 541, 543, 5, 131, 0, 0, 542, 541, 1, 0, 0, 0, 542, 543, 1, 0, 0, 0, 543, 544, 1, 0, 0, 0, 544, 546, 3, 68, 34, 0, 545, 542, 1, 0, 0, 0, 545, 546, 1, 0, 0, 0, 546, 45, 1, 0, 0, 0, 547, 548, 3, 156, 78, 0, 548, 549, 5, 131, 0, 0, 549, 550, 5, 53, 0, 0, 550, 551, 5, 131, 0, 0, 551, 553, 1, 0, 0, 0, 552, 547, 1, 0, 0, 0, 552, 553, 1, 0, 0, 0, 553, 554, 1, 0, 0, 0, 554, 555, 3, 172, 86, 0, 555, 47, 1, 0, 0, 0, 556, 561, 5, 63, 0, 0, 557, 559, 5, 131, 0, 0, 558, 557, 1, 0, 0, 0, 558, 559, 1, 0, 0, 0, 559, 560, 1, 0, 0, 0, 560, 562, 5, 64, 0, 0, 561, 558, 1, 0, 0, 0, 561, 562, 1, 0, 0, 0, 562, 563, 1, 0, 0, 0, 563, 564, 5, 131, 0, 0, 564, 569, 3, 52, 26, 0, 565, 567, 5, 131, 0, 0, 566, 565, 1, 0, 0, 0, 566, 567, 1, 0, 0, 0, 567, 568, 1, 0, 0, 0, 568, 570, 3, 68, 34, 0, 569, 566, 1, 0, 0, 0, 569, 570, 1, 0, 0, 0, 570, 49, 1, 0, 0, 0, 571, 576, 5, 65, 0, 0, 572, 574, 5, 131, 0, 0, 573, 572, 1, 0, 0, 0, 573, 574, 1, 0, 0, 0, 574, 575, 1, 0, 0, 0, 575, 577, 5, 64, 0, 0, 576, 573, 1, 0, 0, 0, 576, 577, 1, 0, 0, 0, 577, 578, 1, 0, 0, 0, 578, 579, 5, 131, 0, 0, 579, 580, 3, 52, 26, 0, 580, 51, 1, 0, 0, 0, 581, 584, 3, 54, 27, 0, 582, 583, 5, 131, 0, 0, 583, 585, 3, 58, 29, 0, 584, 582, 1, 0, 0, 0, 584, 585, 1, 0, 0, 0, 585, 588, 1, 0, 0, 0, 586, 587, 5, 131, 0, 0, 587, 589, 3, 60, 30, 0, 588, 586, 1, 0, 0, 0, 588, 589, 1, 0, 0, 0, 589, 592, 1, 0, 0, 0, 590, 591, 5, 131, 0, 0, 591, 593, 3, 62, 31, 0, 592, 590, 1, 0, 0, 0, 592, 593, 1, 0, 0, 0, 593, 53, 1, 0, 0, 0, 594, 605, 5, 5, 0, 0, 595, 597, 5, 131, 0, 0, 596, 595, 1, 0, 0, 0, 596, 597, 1, 0, 0, 0, 597, 598, 1, 0, 0, 0, 598, 600, 5, 2, 0, 0, 599, 601, 5, 131, 0, 0, 600, 599, 1, 0, 0, 0, 600, 601, 1, 0, 0, 0, 601, 602, 1, 0, 0, 0, 602, 604, 3, 56, 28, 0, 603, 596, 1, 0, 0, 0, 604, 607, 1, 0, 0, 0, 605, 603, 1, 0, 0, 0, 605, 606, 1, 0, 0, 0, 606, 623, 1, 0, 0, 0, 607, 605, 1, 0, 0, 0, 608, 619, 3, 56, 28, 0, 609, 611, 5, 131, 0, 0, 610, 609, 1, 0, 0, 0, 610, 611, 1, 0, 0, 0, 611, 612, 1, 0, 0, 0, 612, 614, 5, 2, 0, 0, 613, 615, 5, 131, 0, 0, 614, 613, 1, 0, 0, 0, 614, 615, 1, 0, 0, 0, 615, 616, 1, 0, 0, 0, 616, 618, 3, 56, 28, 0, 617, 610, 1, 0, 0, 0, 618, 621, 1, 0, 0, 0, 619, 617, 1, 0, 0, 0, 619, 620, 1, 0, 0, 0, 620, 623, 1, 0, 0, 0, 621, 619, 1, 0, 0, 0, 622, 594, 1, 0, 0, 0, 622, 608, 1, 0, 0, 0, 623, 55, 1, 0, 0, 0, 624, 625, 3, 100, 50, 0, 625, 626, 5, 131, 0, 0, 626, 627, 5, 53, 0, 0, 627, 628, 5, 131, 0, 0, 628, 629, 3, 172, 86, 0, 629, 632, 1, 0, 0, 0, 630, 632, 3, 100, 50, 0, 631, 624, 1, 0, 0, 0, 631, 630, 1, 0, 0, 0, 632, 57, 1, 0, 0, 0, 633, 634, 5, 66, 0, 0, 634, 635, 5, 131, 0, 0, 635, 636, 5, 67, 0, 0, 636, 637, 5, 131, 0, 0, 637, 645, 3, 64, 32, 0, 638, 640, 5, 2, 0, 0, 639, 641, 5, 131, 0, 0, 640, 639, 1, 0, 0, 0, 640, 641, 1, 0, 0, 0, 641, 642, 1, 0, 0, 0, 642, 644, 3, 64, 32, 0, 643, 638, 1, 0, 0, 0, 644, 647, 1, 0, 0, 0, 645, 643, 1, 0, 0, 0, 645, 646, 1, 0, 0, 0, 646, 59, 1, 0, 0, 0, 647, 645, 1, 0, 0, 0, 648, 649, 5, 68, 0, 0, 649, 650, 5, 131, 0, 0, 650, 651, 3, 100, 50, 0, 651, 61, 1, 0, 0, 0, 652, 653, 5, 69, 0, 0, 653, 654, 5, 131, 0, 0, 654, 655, 3, 100, 50, 0, 655, 63, 1, 0, 0, 0, 656, 661, 3, 100, 50, 0, 657, 659, 5, 131, 0, 0, 658, 657, 1, 0, 0, 0, 658, 659, 1, 0, 0, 0, 659, 660, 
1, 0, 0, 0, 660, 662, 7, 0, 0, 0, 661, 658, 1, 0, 0, 0, 661, 662, 1, 0, 0, 0, 662, 65, 1, 0, 0, 0, 663, 664, 5, 74, 0, 0, 664, 665, 5, 131, 0, 0, 665, 666, 5, 75, 0, 0, 666, 667, 5, 131, 0, 0, 667, 668, 5, 55, 0, 0, 668, 669, 5, 131, 0, 0, 669, 678, 3, 172, 86, 0, 670, 671, 5, 74, 0, 0, 671, 672, 5, 131, 0, 0, 672, 673, 5, 76, 0, 0, 673, 674, 5, 131, 0, 0, 674, 675, 5, 55, 0, 0, 675, 676, 5, 131, 0, 0, 676, 678, 3, 172, 86, 0, 677, 663, 1, 0, 0, 0, 677, 670, 1, 0, 0, 0, 678, 67, 1, 0, 0, 0, 679, 680, 5, 77, 0, 0, 680, 681, 5, 131, 0, 0, 681, 682, 3, 100, 50, 0, 682, 69, 1, 0, 0, 0, 683, 694, 3, 72, 36, 0, 684, 686, 5, 131, 0, 0, 685, 684, 1, 0, 0, 0, 685, 686, 1, 0, 0, 0, 686, 687, 1, 0, 0, 0, 687, 689, 5, 2, 0, 0, 688, 690, 5, 131, 0, 0, 689, 688, 1, 0, 0, 0, 689, 690, 1, 0, 0, 0, 690, 691, 1, 0, 0, 0, 691, 693, 3, 72, 36, 0, 692, 685, 1, 0, 0, 0, 693, 696, 1, 0, 0, 0, 694, 692, 1, 0, 0, 0, 694, 695, 1, 0, 0, 0, 695, 71, 1, 0, 0, 0, 696, 694, 1, 0, 0, 0, 697, 699, 3, 172, 86, 0, 698, 700, 5, 131, 0, 0, 699, 698, 1, 0, 0, 0, 699, 700, 1, 0, 0, 0, 700, 701, 1, 0, 0, 0, 701, 703, 5, 3, 0, 0, 702, 704, 5, 131, 0, 0, 703, 702, 1, 0, 0, 0, 703, 704, 1, 0, 0, 0, 704, 705, 1, 0, 0, 0, 705, 706, 3, 74, 37, 0, 706, 709, 1, 0, 0, 0, 707, 709, 3, 74, 37, 0, 708, 697, 1, 0, 0, 0, 708, 707, 1, 0, 0, 0, 709, 73, 1, 0, 0, 0, 710, 711, 3, 76, 38, 0, 711, 75, 1, 0, 0, 0, 712, 719, 3, 78, 39, 0, 713, 715, 5, 131, 0, 0, 714, 713, 1, 0, 0, 0, 714, 715, 1, 0, 0, 0, 715, 716, 1, 0, 0, 0, 716, 718, 3, 80, 40, 0, 717, 714, 1, 0, 0, 0, 718, 721, 1, 0, 0, 0, 719, 717, 1, 0, 0, 0, 719, 720, 1, 0, 0, 0, 720, 727, 1, 0, 0, 0, 721, 719, 1, 0, 0, 0, 722, 723, 5, 6, 0, 0, 723, 724, 3, 76, 38, 0, 724, 725, 5, 7, 0, 0, 725, 727, 1, 0, 0, 0, 726, 712, 1, 0, 0, 0, 726, 722, 1, 0, 0, 0, 727, 77, 1, 0, 0, 0, 728, 730, 5, 6, 0, 0, 729, 731, 5, 131, 0, 0, 730, 729, 1, 0, 0, 0, 730, 731, 1, 0, 0, 0, 731, 736, 1, 0, 0, 0, 732, 734, 3, 172, 86, 0, 733, 735, 5, 131, 0, 0, 734, 733, 1, 0, 0, 0, 734, 735, 1, 0, 0, 0, 735, 737, 1, 0, 0, 0, 736, 732, 1, 0, 0, 0, 736, 737, 1, 0, 0, 0, 737, 742, 1, 0, 0, 0, 738, 740, 3, 90, 45, 0, 739, 741, 5, 131, 0, 0, 740, 739, 1, 0, 0, 0, 740, 741, 1, 0, 0, 0, 741, 743, 1, 0, 0, 0, 742, 738, 1, 0, 0, 0, 742, 743, 1, 0, 0, 0, 743, 748, 1, 0, 0, 0, 744, 746, 3, 86, 43, 0, 745, 747, 5, 131, 0, 0, 746, 745, 1, 0, 0, 0, 746, 747, 1, 0, 0, 0, 747, 749, 1, 0, 0, 0, 748, 744, 1, 0, 0, 0, 748, 749, 1, 0, 0, 0, 749, 750, 1, 0, 0, 0, 750, 751, 5, 7, 0, 0, 751, 79, 1, 0, 0, 0, 752, 754, 3, 82, 41, 0, 753, 755, 5, 131, 0, 0, 754, 753, 1, 0, 0, 0, 754, 755, 1, 0, 0, 0, 755, 756, 1, 0, 0, 0, 756, 757, 3, 78, 39, 0, 757, 81, 1, 0, 0, 0, 758, 760, 3, 194, 97, 0, 759, 761, 5, 131, 0, 0, 760, 759, 1, 0, 0, 0, 760, 761, 1, 0, 0, 0, 761, 762, 1, 0, 0, 0, 762, 764, 3, 198, 99, 0, 763, 765, 5, 131, 0, 0, 764, 763, 1, 0, 0, 0, 764, 765, 1, 0, 0, 0, 765, 767, 1, 0, 0, 0, 766, 768, 3, 84, 42, 0, 767, 766, 1, 0, 0, 0, 767, 768, 1, 0, 0, 0, 768, 770, 1, 0, 0, 0, 769, 771, 5, 131, 0, 0, 770, 769, 1, 0, 0, 0, 770, 771, 1, 0, 0, 0, 771, 772, 1, 0, 0, 0, 772, 774, 3, 198, 99, 0, 773, 775, 5, 131, 0, 0, 774, 773, 1, 0, 0, 0, 774, 775, 1, 0, 0, 0, 775, 776, 1, 0, 0, 0, 776, 777, 3, 196, 98, 0, 777, 823, 1, 0, 0, 0, 778, 780, 3, 194, 97, 0, 779, 781, 5, 131, 0, 0, 780, 779, 1, 0, 0, 0, 780, 781, 1, 0, 0, 0, 781, 782, 1, 0, 0, 0, 782, 784, 3, 198, 99, 0, 783, 785, 5, 131, 0, 0, 784, 783, 1, 0, 0, 0, 784, 785, 1, 0, 0, 0, 785, 787, 1, 0, 0, 0, 786, 788, 3, 84, 42, 0, 787, 786, 1, 0, 0, 0, 787, 788, 1, 0, 0, 0, 788, 790, 1, 0, 0, 0, 789, 
791, 5, 131, 0, 0, 790, 789, 1, 0, 0, 0, 790, 791, 1, 0, 0, 0, 791, 792, 1, 0, 0, 0, 792, 793, 3, 198, 99, 0, 793, 823, 1, 0, 0, 0, 794, 796, 3, 198, 99, 0, 795, 797, 5, 131, 0, 0, 796, 795, 1, 0, 0, 0, 796, 797, 1, 0, 0, 0, 797, 799, 1, 0, 0, 0, 798, 800, 3, 84, 42, 0, 799, 798, 1, 0, 0, 0, 799, 800, 1, 0, 0, 0, 800, 802, 1, 0, 0, 0, 801, 803, 5, 131, 0, 0, 802, 801, 1, 0, 0, 0, 802, 803, 1, 0, 0, 0, 803, 804, 1, 0, 0, 0, 804, 806, 3, 198, 99, 0, 805, 807, 5, 131, 0, 0, 806, 805, 1, 0, 0, 0, 806, 807, 1, 0, 0, 0, 807, 808, 1, 0, 0, 0, 808, 809, 3, 196, 98, 0, 809, 823, 1, 0, 0, 0, 810, 812, 3, 198, 99, 0, 811, 813, 5, 131, 0, 0, 812, 811, 1, 0, 0, 0, 812, 813, 1, 0, 0, 0, 813, 815, 1, 0, 0, 0, 814, 816, 3, 84, 42, 0, 815, 814, 1, 0, 0, 0, 815, 816, 1, 0, 0, 0, 816, 818, 1, 0, 0, 0, 817, 819, 5, 131, 0, 0, 818, 817, 1, 0, 0, 0, 818, 819, 1, 0, 0, 0, 819, 820, 1, 0, 0, 0, 820, 821, 3, 198, 99, 0, 821, 823, 1, 0, 0, 0, 822, 758, 1, 0, 0, 0, 822, 778, 1, 0, 0, 0, 822, 794, 1, 0, 0, 0, 822, 810, 1, 0, 0, 0, 823, 83, 1, 0, 0, 0, 824, 826, 5, 8, 0, 0, 825, 827, 5, 131, 0, 0, 826, 825, 1, 0, 0, 0, 826, 827, 1, 0, 0, 0, 827, 832, 1, 0, 0, 0, 828, 830, 3, 172, 86, 0, 829, 831, 5, 131, 0, 0, 830, 829, 1, 0, 0, 0, 830, 831, 1, 0, 0, 0, 831, 833, 1, 0, 0, 0, 832, 828, 1, 0, 0, 0, 832, 833, 1, 0, 0, 0, 833, 838, 1, 0, 0, 0, 834, 836, 3, 88, 44, 0, 835, 837, 5, 131, 0, 0, 836, 835, 1, 0, 0, 0, 836, 837, 1, 0, 0, 0, 837, 839, 1, 0, 0, 0, 838, 834, 1, 0, 0, 0, 838, 839, 1, 0, 0, 0, 839, 841, 1, 0, 0, 0, 840, 842, 3, 94, 47, 0, 841, 840, 1, 0, 0, 0, 841, 842, 1, 0, 0, 0, 842, 847, 1, 0, 0, 0, 843, 845, 3, 86, 43, 0, 844, 846, 5, 131, 0, 0, 845, 844, 1, 0, 0, 0, 845, 846, 1, 0, 0, 0, 846, 848, 1, 0, 0, 0, 847, 843, 1, 0, 0, 0, 847, 848, 1, 0, 0, 0, 848, 849, 1, 0, 0, 0, 849, 850, 5, 9, 0, 0, 850, 85, 1, 0, 0, 0, 851, 854, 3, 176, 88, 0, 852, 854, 3, 178, 89, 0, 853, 851, 1, 0, 0, 0, 853, 852, 1, 0, 0, 0, 854, 87, 1, 0, 0, 0, 855, 857, 5, 10, 0, 0, 856, 858, 5, 131, 0, 0, 857, 856, 1, 0, 0, 0, 857, 858, 1, 0, 0, 0, 858, 859, 1, 0, 0, 0, 859, 873, 3, 98, 49, 0, 860, 862, 5, 131, 0, 0, 861, 860, 1, 0, 0, 0, 861, 862, 1, 0, 0, 0, 862, 863, 1, 0, 0, 0, 863, 865, 5, 11, 0, 0, 864, 866, 5, 10, 0, 0, 865, 864, 1, 0, 0, 0, 865, 866, 1, 0, 0, 0, 866, 868, 1, 0, 0, 0, 867, 869, 5, 131, 0, 0, 868, 867, 1, 0, 0, 0, 868, 869, 1, 0, 0, 0, 869, 870, 1, 0, 0, 0, 870, 872, 3, 98, 49, 0, 871, 861, 1, 0, 0, 0, 872, 875, 1, 0, 0, 0, 873, 871, 1, 0, 0, 0, 873, 874, 1, 0, 0, 0, 874, 89, 1, 0, 0, 0, 875, 873, 1, 0, 0, 0, 876, 883, 3, 92, 46, 0, 877, 879, 5, 131, 0, 0, 878, 877, 1, 0, 0, 0, 878, 879, 1, 0, 0, 0, 879, 880, 1, 0, 0, 0, 880, 882, 3, 92, 46, 0, 881, 878, 1, 0, 0, 0, 882, 885, 1, 0, 0, 0, 883, 881, 1, 0, 0, 0, 883, 884, 1, 0, 0, 0, 884, 91, 1, 0, 0, 0, 885, 883, 1, 0, 0, 0, 886, 888, 5, 10, 0, 0, 887, 889, 5, 131, 0, 0, 888, 887, 1, 0, 0, 0, 888, 889, 1, 0, 0, 0, 889, 890, 1, 0, 0, 0, 890, 891, 3, 96, 48, 0, 891, 93, 1, 0, 0, 0, 892, 894, 5, 5, 0, 0, 893, 895, 5, 131, 0, 0, 894, 893, 1, 0, 0, 0, 894, 895, 1, 0, 0, 0, 895, 900, 1, 0, 0, 0, 896, 898, 3, 184, 92, 0, 897, 899, 5, 131, 0, 0, 898, 897, 1, 0, 0, 0, 898, 899, 1, 0, 0, 0, 899, 901, 1, 0, 0, 0, 900, 896, 1, 0, 0, 0, 900, 901, 1, 0, 0, 0, 901, 912, 1, 0, 0, 0, 902, 904, 5, 12, 0, 0, 903, 905, 5, 131, 0, 0, 904, 903, 1, 0, 0, 0, 904, 905, 1, 0, 0, 0, 905, 910, 1, 0, 0, 0, 906, 908, 3, 184, 92, 0, 907, 909, 5, 131, 0, 0, 908, 907, 1, 0, 0, 0, 908, 909, 1, 0, 0, 0, 909, 911, 1, 0, 0, 0, 910, 906, 1, 0, 0, 0, 910, 911, 1, 0, 0, 0, 911, 913, 1, 0, 0, 0, 912, 902, 1, 0, 
0, 0, 912, 913, 1, 0, 0, 0, 913, 95, 1, 0, 0, 0, 914, 915, 3, 188, 94, 0, 915, 97, 1, 0, 0, 0, 916, 917, 3, 188, 94, 0, 917, 99, 1, 0, 0, 0, 918, 919, 3, 102, 51, 0, 919, 101, 1, 0, 0, 0, 920, 927, 3, 104, 52, 0, 921, 922, 5, 131, 0, 0, 922, 923, 5, 78, 0, 0, 923, 924, 5, 131, 0, 0, 924, 926, 3, 104, 52, 0, 925, 921, 1, 0, 0, 0, 926, 929, 1, 0, 0, 0, 927, 925, 1, 0, 0, 0, 927, 928, 1, 0, 0, 0, 928, 103, 1, 0, 0, 0, 929, 927, 1, 0, 0, 0, 930, 937, 3, 106, 53, 0, 931, 932, 5, 131, 0, 0, 932, 933, 5, 79, 0, 0, 933, 934, 5, 131, 0, 0, 934, 936, 3, 106, 53, 0, 935, 931, 1, 0, 0, 0, 936, 939, 1, 0, 0, 0, 937, 935, 1, 0, 0, 0, 937, 938, 1, 0, 0, 0, 938, 105, 1, 0, 0, 0, 939, 937, 1, 0, 0, 0, 940, 947, 3, 108, 54, 0, 941, 942, 5, 131, 0, 0, 942, 943, 5, 80, 0, 0, 943, 944, 5, 131, 0, 0, 944, 946, 3, 108, 54, 0, 945, 941, 1, 0, 0, 0, 946, 949, 1, 0, 0, 0, 947, 945, 1, 0, 0, 0, 947, 948, 1, 0, 0, 0, 948, 107, 1, 0, 0, 0, 949, 947, 1, 0, 0, 0, 950, 952, 5, 81, 0, 0, 951, 953, 5, 131, 0, 0, 952, 951, 1, 0, 0, 0, 952, 953, 1, 0, 0, 0, 953, 955, 1, 0, 0, 0, 954, 950, 1, 0, 0, 0, 955, 958, 1, 0, 0, 0, 956, 954, 1, 0, 0, 0, 956, 957, 1, 0, 0, 0, 957, 959, 1, 0, 0, 0, 958, 956, 1, 0, 0, 0, 959, 960, 3, 110, 55, 0, 960, 109, 1, 0, 0, 0, 961, 968, 3, 112, 56, 0, 962, 964, 5, 131, 0, 0, 963, 962, 1, 0, 0, 0, 963, 964, 1, 0, 0, 0, 964, 965, 1, 0, 0, 0, 965, 967, 3, 138, 69, 0, 966, 963, 1, 0, 0, 0, 967, 970, 1, 0, 0, 0, 968, 966, 1, 0, 0, 0, 968, 969, 1, 0, 0, 0, 969, 111, 1, 0, 0, 0, 970, 968, 1, 0, 0, 0, 971, 990, 3, 114, 57, 0, 972, 974, 5, 131, 0, 0, 973, 972, 1, 0, 0, 0, 973, 974, 1, 0, 0, 0, 974, 975, 1, 0, 0, 0, 975, 977, 5, 13, 0, 0, 976, 978, 5, 131, 0, 0, 977, 976, 1, 0, 0, 0, 977, 978, 1, 0, 0, 0, 978, 979, 1, 0, 0, 0, 979, 989, 3, 114, 57, 0, 980, 982, 5, 131, 0, 0, 981, 980, 1, 0, 0, 0, 981, 982, 1, 0, 0, 0, 982, 983, 1, 0, 0, 0, 983, 985, 5, 14, 0, 0, 984, 986, 5, 131, 0, 0, 985, 984, 1, 0, 0, 0, 985, 986, 1, 0, 0, 0, 986, 987, 1, 0, 0, 0, 987, 989, 3, 114, 57, 0, 988, 973, 1, 0, 0, 0, 988, 981, 1, 0, 0, 0, 989, 992, 1, 0, 0, 0, 990, 988, 1, 0, 0, 0, 990, 991, 1, 0, 0, 0, 991, 113, 1, 0, 0, 0, 992, 990, 1, 0, 0, 0, 993, 1020, 3, 116, 58, 0, 994, 996, 5, 131, 0, 0, 995, 994, 1, 0, 0, 0, 995, 996, 1, 0, 0, 0, 996, 997, 1, 0, 0, 0, 997, 999, 5, 5, 0, 0, 998, 1000, 5, 131, 0, 0, 999, 998, 1, 0, 0, 0, 999, 1000, 1, 0, 0, 0, 1000, 1001, 1, 0, 0, 0, 1001, 1019, 3, 116, 58, 0, 1002, 1004, 5, 131, 0, 0, 1003, 1002, 1, 0, 0, 0, 1003, 1004, 1, 0, 0, 0, 1004, 1005, 1, 0, 0, 0, 1005, 1007, 5, 15, 0, 0, 1006, 1008, 5, 131, 0, 0, 1007, 1006, 1, 0, 0, 0, 1007, 1008, 1, 0, 0, 0, 1008, 1009, 1, 0, 0, 0, 1009, 1019, 3, 116, 58, 0, 1010, 1012, 5, 131, 0, 0, 1011, 1010, 1, 0, 0, 0, 1011, 1012, 1, 0, 0, 0, 1012, 1013, 1, 0, 0, 0, 1013, 1015, 5, 16, 0, 0, 1014, 1016, 5, 131, 0, 0, 1015, 1014, 1, 0, 0, 0, 1015, 1016, 1, 0, 0, 0, 1016, 1017, 1, 0, 0, 0, 1017, 1019, 3, 116, 58, 0, 1018, 995, 1, 0, 0, 0, 1018, 1003, 1, 0, 0, 0, 1018, 1011, 1, 0, 0, 0, 1019, 1022, 1, 0, 0, 0, 1020, 1018, 1, 0, 0, 0, 1020, 1021, 1, 0, 0, 0, 1021, 115, 1, 0, 0, 0, 1022, 1020, 1, 0, 0, 0, 1023, 1034, 3, 118, 59, 0, 1024, 1026, 5, 131, 0, 0, 1025, 1024, 1, 0, 0, 0, 1025, 1026, 1, 0, 0, 0, 1026, 1027, 1, 0, 0, 0, 1027, 1029, 5, 17, 0, 0, 1028, 1030, 5, 131, 0, 0, 1029, 1028, 1, 0, 0, 0, 1029, 1030, 1, 0, 0, 0, 1030, 1031, 1, 0, 0, 0, 1031, 1033, 3, 118, 59, 0, 1032, 1025, 1, 0, 0, 0, 1033, 1036, 1, 0, 0, 0, 1034, 1032, 1, 0, 0, 0, 1034, 1035, 1, 0, 0, 0, 1035, 117, 1, 0, 0, 0, 1036, 1034, 1, 0, 0, 0, 1037, 1039, 7, 1, 0, 0, 1038, 1040, 5, 131, 
0, 0, 1039, 1038, 1, 0, 0, 0, 1039, 1040, 1, 0, 0, 0, 1040, 1042, 1, 0, 0, 0, 1041, 1037, 1, 0, 0, 0, 1042, 1045, 1, 0, 0, 0, 1043, 1041, 1, 0, 0, 0, 1043, 1044, 1, 0, 0, 0, 1044, 1046, 1, 0, 0, 0, 1045, 1043, 1, 0, 0, 0, 1046, 1047, 3, 120, 60, 0, 1047, 119, 1, 0, 0, 0, 1048, 1054, 3, 128, 64, 0, 1049, 1053, 3, 124, 62, 0, 1050, 1053, 3, 122, 61, 0, 1051, 1053, 3, 126, 63, 0, 1052, 1049, 1, 0, 0, 0, 1052, 1050, 1, 0, 0, 0, 1052, 1051, 1, 0, 0, 0, 1053, 1056, 1, 0, 0, 0, 1054, 1052, 1, 0, 0, 0, 1054, 1055, 1, 0, 0, 0, 1055, 121, 1, 0, 0, 0, 1056, 1054, 1, 0, 0, 0, 1057, 1058, 5, 131, 0, 0, 1058, 1060, 5, 82, 0, 0, 1059, 1061, 5, 131, 0, 0, 1060, 1059, 1, 0, 0, 0, 1060, 1061, 1, 0, 0, 0, 1061, 1062, 1, 0, 0, 0, 1062, 1083, 3, 128, 64, 0, 1063, 1065, 5, 131, 0, 0, 1064, 1063, 1, 0, 0, 0, 1064, 1065, 1, 0, 0, 0, 1065, 1066, 1, 0, 0, 0, 1066, 1067, 5, 8, 0, 0, 1067, 1068, 3, 100, 50, 0, 1068, 1069, 5, 9, 0, 0, 1069, 1083, 1, 0, 0, 0, 1070, 1072, 5, 131, 0, 0, 1071, 1070, 1, 0, 0, 0, 1071, 1072, 1, 0, 0, 0, 1072, 1073, 1, 0, 0, 0, 1073, 1075, 5, 8, 0, 0, 1074, 1076, 3, 100, 50, 0, 1075, 1074, 1, 0, 0, 0, 1075, 1076, 1, 0, 0, 0, 1076, 1077, 1, 0, 0, 0, 1077, 1079, 5, 12, 0, 0, 1078, 1080, 3, 100, 50, 0, 1079, 1078, 1, 0, 0, 0, 1079, 1080, 1, 0, 0, 0, 1080, 1081, 1, 0, 0, 0, 1081, 1083, 5, 9, 0, 0, 1082, 1057, 1, 0, 0, 0, 1082, 1064, 1, 0, 0, 0, 1082, 1071, 1, 0, 0, 0, 1083, 123, 1, 0, 0, 0, 1084, 1085, 5, 131, 0, 0, 1085, 1086, 5, 83, 0, 0, 1086, 1087, 5, 131, 0, 0, 1087, 1097, 5, 63, 0, 0, 1088, 1089, 5, 131, 0, 0, 1089, 1090, 5, 84, 0, 0, 1090, 1091, 5, 131, 0, 0, 1091, 1097, 5, 63, 0, 0, 1092, 1093, 5, 131, 0, 0, 1093, 1097, 5, 85, 0, 0, 1094, 1095, 5, 131, 0, 0, 1095, 1097, 5, 86, 0, 0, 1096, 1084, 1, 0, 0, 0, 1096, 1088, 1, 0, 0, 0, 1096, 1092, 1, 0, 0, 0, 1096, 1094, 1, 0, 0, 0, 1097, 1099, 1, 0, 0, 0, 1098, 1100, 5, 131, 0, 0, 1099, 1098, 1, 0, 0, 0, 1099, 1100, 1, 0, 0, 0, 1100, 1101, 1, 0, 0, 0, 1101, 1102, 3, 128, 64, 0, 1102, 125, 1, 0, 0, 0, 1103, 1104, 5, 131, 0, 0, 1104, 1105, 5, 87, 0, 0, 1105, 1106, 5, 131, 0, 0, 1106, 1114, 5, 88, 0, 0, 1107, 1108, 5, 131, 0, 0, 1108, 1109, 5, 87, 0, 0, 1109, 1110, 5, 131, 0, 0, 1110, 1111, 5, 81, 0, 0, 1111, 1112, 5, 131, 0, 0, 1112, 1114, 5, 88, 0, 0, 1113, 1103, 1, 0, 0, 0, 1113, 1107, 1, 0, 0, 0, 1114, 127, 1, 0, 0, 0, 1115, 1122, 3, 130, 65, 0, 1116, 1118, 5, 131, 0, 0, 1117, 1116, 1, 0, 0, 0, 1117, 1118, 1, 0, 0, 0, 1118, 1119, 1, 0, 0, 0, 1119, 1121, 3, 166, 83, 0, 1120, 1117, 1, 0, 0, 0, 1121, 1124, 1, 0, 0, 0, 1122, 1120, 1, 0, 0, 0, 1122, 1123, 1, 0, 0, 0, 1123, 1129, 1, 0, 0, 0, 1124, 1122, 1, 0, 0, 0, 1125, 1127, 5, 131, 0, 0, 1126, 1125, 1, 0, 0, 0, 1126, 1127, 1, 0, 0, 0, 1127, 1128, 1, 0, 0, 0, 1128, 1130, 3, 90, 45, 0, 1129, 1126, 1, 0, 0, 0, 1129, 1130, 1, 0, 0, 0, 1130, 129, 1, 0, 0, 0, 1131, 1210, 3, 132, 66, 0, 1132, 1210, 3, 178, 89, 0, 1133, 1210, 3, 168, 84, 0, 1134, 1136, 5, 89, 0, 0, 1135, 1137, 5, 131, 0, 0, 1136, 1135, 1, 0, 0, 0, 1136, 1137, 1, 0, 0, 0, 1137, 1138, 1, 0, 0, 0, 1138, 1140, 5, 6, 0, 0, 1139, 1141, 5, 131, 0, 0, 1140, 1139, 1, 0, 0, 0, 1140, 1141, 1, 0, 0, 0, 1141, 1142, 1, 0, 0, 0, 1142, 1144, 5, 5, 0, 0, 1143, 1145, 5, 131, 0, 0, 1144, 1143, 1, 0, 0, 0, 1144, 1145, 1, 0, 0, 0, 1145, 1146, 1, 0, 0, 0, 1146, 1210, 5, 7, 0, 0, 1147, 1210, 3, 162, 81, 0, 1148, 1210, 3, 164, 82, 0, 1149, 1151, 5, 49, 0, 0, 1150, 1152, 5, 131, 0, 0, 1151, 1150, 1, 0, 0, 0, 1151, 1152, 1, 0, 0, 0, 1152, 1153, 1, 0, 0, 0, 1153, 1155, 5, 6, 0, 0, 1154, 1156, 5, 131, 0, 0, 1155, 1154, 1, 0, 0, 0, 1155, 1156, 1, 0, 0, 0, 1156, 
1157, 1, 0, 0, 0, 1157, 1159, 3, 144, 72, 0, 1158, 1160, 5, 131, 0, 0, 1159, 1158, 1, 0, 0, 0, 1159, 1160, 1, 0, 0, 0, 1160, 1161, 1, 0, 0, 0, 1161, 1162, 5, 7, 0, 0, 1162, 1210, 1, 0, 0, 0, 1163, 1165, 5, 90, 0, 0, 1164, 1166, 5, 131, 0, 0, 1165, 1164, 1, 0, 0, 0, 1165, 1166, 1, 0, 0, 0, 1166, 1167, 1, 0, 0, 0, 1167, 1169, 5, 6, 0, 0, 1168, 1170, 5, 131, 0, 0, 1169, 1168, 1, 0, 0, 0, 1169, 1170, 1, 0, 0, 0, 1170, 1171, 1, 0, 0, 0, 1171, 1173, 3, 144, 72, 0, 1172, 1174, 5, 131, 0, 0, 1173, 1172, 1, 0, 0, 0, 1173, 1174, 1, 0, 0, 0, 1174, 1175, 1, 0, 0, 0, 1175, 1176, 5, 7, 0, 0, 1176, 1210, 1, 0, 0, 0, 1177, 1179, 5, 91, 0, 0, 1178, 1180, 5, 131, 0, 0, 1179, 1178, 1, 0, 0, 0, 1179, 1180, 1, 0, 0, 0, 1180, 1181, 1, 0, 0, 0, 1181, 1183, 5, 6, 0, 0, 1182, 1184, 5, 131, 0, 0, 1183, 1182, 1, 0, 0, 0, 1183, 1184, 1, 0, 0, 0, 1184, 1185, 1, 0, 0, 0, 1185, 1187, 3, 144, 72, 0, 1186, 1188, 5, 131, 0, 0, 1187, 1186, 1, 0, 0, 0, 1187, 1188, 1, 0, 0, 0, 1188, 1189, 1, 0, 0, 0, 1189, 1190, 5, 7, 0, 0, 1190, 1210, 1, 0, 0, 0, 1191, 1193, 5, 92, 0, 0, 1192, 1194, 5, 131, 0, 0, 1193, 1192, 1, 0, 0, 0, 1193, 1194, 1, 0, 0, 0, 1194, 1195, 1, 0, 0, 0, 1195, 1197, 5, 6, 0, 0, 1196, 1198, 5, 131, 0, 0, 1197, 1196, 1, 0, 0, 0, 1197, 1198, 1, 0, 0, 0, 1198, 1199, 1, 0, 0, 0, 1199, 1201, 3, 144, 72, 0, 1200, 1202, 5, 131, 0, 0, 1201, 1200, 1, 0, 0, 0, 1201, 1202, 1, 0, 0, 0, 1202, 1203, 1, 0, 0, 0, 1203, 1204, 5, 7, 0, 0, 1204, 1210, 1, 0, 0, 0, 1205, 1210, 3, 142, 71, 0, 1206, 1210, 3, 140, 70, 0, 1207, 1210, 3, 148, 74, 0, 1208, 1210, 3, 172, 86, 0, 1209, 1131, 1, 0, 0, 0, 1209, 1132, 1, 0, 0, 0, 1209, 1133, 1, 0, 0, 0, 1209, 1134, 1, 0, 0, 0, 1209, 1147, 1, 0, 0, 0, 1209, 1148, 1, 0, 0, 0, 1209, 1149, 1, 0, 0, 0, 1209, 1163, 1, 0, 0, 0, 1209, 1177, 1, 0, 0, 0, 1209, 1191, 1, 0, 0, 0, 1209, 1205, 1, 0, 0, 0, 1209, 1206, 1, 0, 0, 0, 1209, 1207, 1, 0, 0, 0, 1209, 1208, 1, 0, 0, 0, 1210, 131, 1, 0, 0, 0, 1211, 1218, 3, 174, 87, 0, 1212, 1218, 5, 101, 0, 0, 1213, 1218, 3, 134, 67, 0, 1214, 1218, 5, 88, 0, 0, 1215, 1218, 3, 176, 88, 0, 1216, 1218, 3, 136, 68, 0, 1217, 1211, 1, 0, 0, 0, 1217, 1212, 1, 0, 0, 0, 1217, 1213, 1, 0, 0, 0, 1217, 1214, 1, 0, 0, 0, 1217, 1215, 1, 0, 0, 0, 1217, 1216, 1, 0, 0, 0, 1218, 133, 1, 0, 0, 0, 1219, 1220, 7, 2, 0, 0, 1220, 135, 1, 0, 0, 0, 1221, 1223, 5, 8, 0, 0, 1222, 1224, 5, 131, 0, 0, 1223, 1222, 1, 0, 0, 0, 1223, 1224, 1, 0, 0, 0, 1224, 1242, 1, 0, 0, 0, 1225, 1227, 3, 100, 50, 0, 1226, 1228, 5, 131, 0, 0, 1227, 1226, 1, 0, 0, 0, 1227, 1228, 1, 0, 0, 0, 1228, 1239, 1, 0, 0, 0, 1229, 1231, 5, 2, 0, 0, 1230, 1232, 5, 131, 0, 0, 1231, 1230, 1, 0, 0, 0, 1231, 1232, 1, 0, 0, 0, 1232, 1233, 1, 0, 0, 0, 1233, 1235, 3, 100, 50, 0, 1234, 1236, 5, 131, 0, 0, 1235, 1234, 1, 0, 0, 0, 1235, 1236, 1, 0, 0, 0, 1236, 1238, 1, 0, 0, 0, 1237, 1229, 1, 0, 0, 0, 1238, 1241, 1, 0, 0, 0, 1239, 1237, 1, 0, 0, 0, 1239, 1240, 1, 0, 0, 0, 1240, 1243, 1, 0, 0, 0, 1241, 1239, 1, 0, 0, 0, 1242, 1225, 1, 0, 0, 0, 1242, 1243, 1, 0, 0, 0, 1243, 1244, 1, 0, 0, 0, 1244, 1245, 5, 9, 0, 0, 1245, 137, 1, 0, 0, 0, 1246, 1248, 5, 3, 0, 0, 1247, 1249, 5, 131, 0, 0, 1248, 1247, 1, 0, 0, 0, 1248, 1249, 1, 0, 0, 0, 1249, 1250, 1, 0, 0, 0, 1250, 1277, 3, 112, 56, 0, 1251, 1253, 5, 18, 0, 0, 1252, 1254, 5, 131, 0, 0, 1253, 1252, 1, 0, 0, 0, 1253, 1254, 1, 0, 0, 0, 1254, 1255, 1, 0, 0, 0, 1255, 1277, 3, 112, 56, 0, 1256, 1258, 5, 19, 0, 0, 1257, 1259, 5, 131, 0, 0, 1258, 1257, 1, 0, 0, 0, 1258, 1259, 1, 0, 0, 0, 1259, 1260, 1, 0, 0, 0, 1260, 1277, 3, 112, 56, 0, 1261, 1263, 5, 20, 0, 0, 1262, 1264, 5, 131, 0, 0, 1263, 1262, 
1, 0, 0, 0, 1263, 1264, 1, 0, 0, 0, 1264, 1265, 1, 0, 0, 0, 1265, 1277, 3, 112, 56, 0, 1266, 1268, 5, 21, 0, 0, 1267, 1269, 5, 131, 0, 0, 1268, 1267, 1, 0, 0, 0, 1268, 1269, 1, 0, 0, 0, 1269, 1270, 1, 0, 0, 0, 1270, 1277, 3, 112, 56, 0, 1271, 1273, 5, 22, 0, 0, 1272, 1274, 5, 131, 0, 0, 1273, 1272, 1, 0, 0, 0, 1273, 1274, 1, 0, 0, 0, 1274, 1275, 1, 0, 0, 0, 1275, 1277, 3, 112, 56, 0, 1276, 1246, 1, 0, 0, 0, 1276, 1251, 1, 0, 0, 0, 1276, 1256, 1, 0, 0, 0, 1276, 1261, 1, 0, 0, 0, 1276, 1266, 1, 0, 0, 0, 1276, 1271, 1, 0, 0, 0, 1277, 139, 1, 0, 0, 0, 1278, 1280, 5, 6, 0, 0, 1279, 1281, 5, 131, 0, 0, 1280, 1279, 1, 0, 0, 0, 1280, 1281, 1, 0, 0, 0, 1281, 1282, 1, 0, 0, 0, 1282, 1284, 3, 100, 50, 0, 1283, 1285, 5, 131, 0, 0, 1284, 1283, 1, 0, 0, 0, 1284, 1285, 1, 0, 0, 0, 1285, 1286, 1, 0, 0, 0, 1286, 1287, 5, 7, 0, 0, 1287, 141, 1, 0, 0, 0, 1288, 1293, 3, 78, 39, 0, 1289, 1291, 5, 131, 0, 0, 1290, 1289, 1, 0, 0, 0, 1290, 1291, 1, 0, 0, 0, 1291, 1292, 1, 0, 0, 0, 1292, 1294, 3, 80, 40, 0, 1293, 1290, 1, 0, 0, 0, 1294, 1295, 1, 0, 0, 0, 1295, 1293, 1, 0, 0, 0, 1295, 1296, 1, 0, 0, 0, 1296, 143, 1, 0, 0, 0, 1297, 1302, 3, 146, 73, 0, 1298, 1300, 5, 131, 0, 0, 1299, 1298, 1, 0, 0, 0, 1299, 1300, 1, 0, 0, 0, 1300, 1301, 1, 0, 0, 0, 1301, 1303, 3, 68, 34, 0, 1302, 1299, 1, 0, 0, 0, 1302, 1303, 1, 0, 0, 0, 1303, 145, 1, 0, 0, 0, 1304, 1305, 3, 172, 86, 0, 1305, 1306, 5, 131, 0, 0, 1306, 1307, 5, 82, 0, 0, 1307, 1308, 5, 131, 0, 0, 1308, 1309, 3, 100, 50, 0, 1309, 147, 1, 0, 0, 0, 1310, 1312, 3, 150, 75, 0, 1311, 1313, 5, 131, 0, 0, 1312, 1311, 1, 0, 0, 0, 1312, 1313, 1, 0, 0, 0, 1313, 1314, 1, 0, 0, 0, 1314, 1316, 5, 6, 0, 0, 1315, 1317, 5, 131, 0, 0, 1316, 1315, 1, 0, 0, 0, 1316, 1317, 1, 0, 0, 0, 1317, 1322, 1, 0, 0, 0, 1318, 1320, 5, 64, 0, 0, 1319, 1321, 5, 131, 0, 0, 1320, 1319, 1, 0, 0, 0, 1320, 1321, 1, 0, 0, 0, 1321, 1323, 1, 0, 0, 0, 1322, 1318, 1, 0, 0, 0, 1322, 1323, 1, 0, 0, 0, 1323, 1341, 1, 0, 0, 0, 1324, 1326, 3, 100, 50, 0, 1325, 1327, 5, 131, 0, 0, 1326, 1325, 1, 0, 0, 0, 1326, 1327, 1, 0, 0, 0, 1327, 1338, 1, 0, 0, 0, 1328, 1330, 5, 2, 0, 0, 1329, 1331, 5, 131, 0, 0, 1330, 1329, 1, 0, 0, 0, 1330, 1331, 1, 0, 0, 0, 1331, 1332, 1, 0, 0, 0, 1332, 1334, 3, 100, 50, 0, 1333, 1335, 5, 131, 0, 0, 1334, 1333, 1, 0, 0, 0, 1334, 1335, 1, 0, 0, 0, 1335, 1337, 1, 0, 0, 0, 1336, 1328, 1, 0, 0, 0, 1337, 1340, 1, 0, 0, 0, 1338, 1336, 1, 0, 0, 0, 1338, 1339, 1, 0, 0, 0, 1339, 1342, 1, 0, 0, 0, 1340, 1338, 1, 0, 0, 0, 1341, 1324, 1, 0, 0, 0, 1341, 1342, 1, 0, 0, 0, 1342, 1343, 1, 0, 0, 0, 1343, 1344, 5, 7, 0, 0, 1344, 149, 1, 0, 0, 0, 1345, 1346, 3, 160, 80, 0, 1346, 1347, 3, 190, 95, 0, 1347, 1350, 1, 0, 0, 0, 1348, 1350, 5, 95, 0, 0, 1349, 1345, 1, 0, 0, 0, 1349, 1348, 1, 0, 0, 0, 1350, 151, 1, 0, 0, 0, 1351, 1353, 3, 158, 79, 0, 1352, 1354, 5, 131, 0, 0, 1353, 1352, 1, 0, 0, 0, 1353, 1354, 1, 0, 0, 0, 1354, 1355, 1, 0, 0, 0, 1355, 1357, 5, 6, 0, 0, 1356, 1358, 5, 131, 0, 0, 1357, 1356, 1, 0, 0, 0, 1357, 1358, 1, 0, 0, 0, 1358, 1376, 1, 0, 0, 0, 1359, 1361, 3, 100, 50, 0, 1360, 1362, 5, 131, 0, 0, 1361, 1360, 1, 0, 0, 0, 1361, 1362, 1, 0, 0, 0, 1362, 1373, 1, 0, 0, 0, 1363, 1365, 5, 2, 0, 0, 1364, 1366, 5, 131, 0, 0, 1365, 1364, 1, 0, 0, 0, 1365, 1366, 1, 0, 0, 0, 1366, 1367, 1, 0, 0, 0, 1367, 1369, 3, 100, 50, 0, 1368, 1370, 5, 131, 0, 0, 1369, 1368, 1, 0, 0, 0, 1369, 1370, 1, 0, 0, 0, 1370, 1372, 1, 0, 0, 0, 1371, 1363, 1, 0, 0, 0, 1372, 1375, 1, 0, 0, 0, 1373, 1371, 1, 0, 0, 0, 1373, 1374, 1, 0, 0, 0, 1374, 1377, 1, 0, 0, 0, 1375, 1373, 1, 0, 0, 0, 1376, 1359, 1, 0, 0, 0, 1376, 1377, 1, 0, 0, 0, 
1377, 1378, 1, 0, 0, 0, 1378, 1379, 5, 7, 0, 0, 1379, 153, 1, 0, 0, 0, 1380, 1381, 3, 158, 79, 0, 1381, 155, 1, 0, 0, 0, 1382, 1383, 3, 190, 95, 0, 1383, 157, 1, 0, 0, 0, 1384, 1385, 3, 160, 80, 0, 1385, 1386, 3, 190, 95, 0, 1386, 159, 1, 0, 0, 0, 1387, 1388, 3, 190, 95, 0, 1388, 1389, 5, 23, 0, 0, 1389, 1391, 1, 0, 0, 0, 1390, 1387, 1, 0, 0, 0, 1391, 1394, 1, 0, 0, 0, 1392, 1390, 1, 0, 0, 0, 1392, 1393, 1, 0, 0, 0, 1393, 161, 1, 0, 0, 0, 1394, 1392, 1, 0, 0, 0, 1395, 1397, 5, 8, 0, 0, 1396, 1398, 5, 131, 0, 0, 1397, 1396, 1, 0, 0, 0, 1397, 1398, 1, 0, 0, 0, 1398, 1399, 1, 0, 0, 0, 1399, 1408, 3, 144, 72, 0, 1400, 1402, 5, 131, 0, 0, 1401, 1400, 1, 0, 0, 0, 1401, 1402, 1, 0, 0, 0, 1402, 1403, 1, 0, 0, 0, 1403, 1405, 5, 11, 0, 0, 1404, 1406, 5, 131, 0, 0, 1405, 1404, 1, 0, 0, 0, 1405, 1406, 1, 0, 0, 0, 1406, 1407, 1, 0, 0, 0, 1407, 1409, 3, 100, 50, 0, 1408, 1401, 1, 0, 0, 0, 1408, 1409, 1, 0, 0, 0, 1409, 1411, 1, 0, 0, 0, 1410, 1412, 5, 131, 0, 0, 1411, 1410, 1, 0, 0, 0, 1411, 1412, 1, 0, 0, 0, 1412, 1413, 1, 0, 0, 0, 1413, 1414, 5, 9, 0, 0, 1414, 163, 1, 0, 0, 0, 1415, 1417, 5, 8, 0, 0, 1416, 1418, 5, 131, 0, 0, 1417, 1416, 1, 0, 0, 0, 1417, 1418, 1, 0, 0, 0, 1418, 1427, 1, 0, 0, 0, 1419, 1421, 3, 172, 86, 0, 1420, 1422, 5, 131, 0, 0, 1421, 1420, 1, 0, 0, 0, 1421, 1422, 1, 0, 0, 0, 1422, 1423, 1, 0, 0, 0, 1423, 1425, 5, 3, 0, 0, 1424, 1426, 5, 131, 0, 0, 1425, 1424, 1, 0, 0, 0, 1425, 1426, 1, 0, 0, 0, 1426, 1428, 1, 0, 0, 0, 1427, 1419, 1, 0, 0, 0, 1427, 1428, 1, 0, 0, 0, 1428, 1429, 1, 0, 0, 0, 1429, 1431, 3, 142, 71, 0, 1430, 1432, 5, 131, 0, 0, 1431, 1430, 1, 0, 0, 0, 1431, 1432, 1, 0, 0, 0, 1432, 1441, 1, 0, 0, 0, 1433, 1435, 5, 77, 0, 0, 1434, 1436, 5, 131, 0, 0, 1435, 1434, 1, 0, 0, 0, 1435, 1436, 1, 0, 0, 0, 1436, 1437, 1, 0, 0, 0, 1437, 1439, 3, 100, 50, 0, 1438, 1440, 5, 131, 0, 0, 1439, 1438, 1, 0, 0, 0, 1439, 1440, 1, 0, 0, 0, 1440, 1442, 1, 0, 0, 0, 1441, 1433, 1, 0, 0, 0, 1441, 1442, 1, 0, 0, 0, 1442, 1443, 1, 0, 0, 0, 1443, 1445, 5, 11, 0, 0, 1444, 1446, 5, 131, 0, 0, 1445, 1444, 1, 0, 0, 0, 1445, 1446, 1, 0, 0, 0, 1446, 1447, 1, 0, 0, 0, 1447, 1449, 3, 100, 50, 0, 1448, 1450, 5, 131, 0, 0, 1449, 1448, 1, 0, 0, 0, 1449, 1450, 1, 0, 0, 0, 1450, 1451, 1, 0, 0, 0, 1451, 1452, 5, 9, 0, 0, 1452, 165, 1, 0, 0, 0, 1453, 1455, 5, 23, 0, 0, 1454, 1456, 5, 131, 0, 0, 1455, 1454, 1, 0, 0, 0, 1455, 1456, 1, 0, 0, 0, 1456, 1457, 1, 0, 0, 0, 1457, 1458, 3, 182, 91, 0, 1458, 167, 1, 0, 0, 0, 1459, 1464, 5, 96, 0, 0, 1460, 1462, 5, 131, 0, 0, 1461, 1460, 1, 0, 0, 0, 1461, 1462, 1, 0, 0, 0, 1462, 1463, 1, 0, 0, 0, 1463, 1465, 3, 170, 85, 0, 1464, 1461, 1, 0, 0, 0, 1465, 1466, 1, 0, 0, 0, 1466, 1464, 1, 0, 0, 0, 1466, 1467, 1, 0, 0, 0, 1467, 1482, 1, 0, 0, 0, 1468, 1470, 5, 96, 0, 0, 1469, 1471, 5, 131, 0, 0, 1470, 1469, 1, 0, 0, 0, 1470, 1471, 1, 0, 0, 0, 1471, 1472, 1, 0, 0, 0, 1472, 1477, 3, 100, 50, 0, 1473, 1475, 5, 131, 0, 0, 1474, 1473, 1, 0, 0, 0, 1474, 1475, 1, 0, 0, 0, 1475, 1476, 1, 0, 0, 0, 1476, 1478, 3, 170, 85, 0, 1477, 1474, 1, 0, 0, 0, 1478, 1479, 1, 0, 0, 0, 1479, 1477, 1, 0, 0, 0, 1479, 1480, 1, 0, 0, 0, 1480, 1482, 1, 0, 0, 0, 1481, 1459, 1, 0, 0, 0, 1481, 1468, 1, 0, 0, 0, 1482, 1491, 1, 0, 0, 0, 1483, 1485, 5, 131, 0, 0, 1484, 1483, 1, 0, 0, 0, 1484, 1485, 1, 0, 0, 0, 1485, 1486, 1, 0, 0, 0, 1486, 1488, 5, 97, 0, 0, 1487, 1489, 5, 131, 0, 0, 1488, 1487, 1, 0, 0, 0, 1488, 1489, 1, 0, 0, 0, 1489, 1490, 1, 0, 0, 0, 1490, 1492, 3, 100, 50, 0, 1491, 1484, 1, 0, 0, 0, 1491, 1492, 1, 0, 0, 0, 1492, 1494, 1, 0, 0, 0, 1493, 1495, 5, 131, 0, 0, 1494, 1493, 1, 0, 0, 0, 1494, 1495, 
1, 0, 0, 0, 1495, 1496, 1, 0, 0, 0, 1496, 1497, 5, 98, 0, 0, 1497, 169, 1, 0, 0, 0, 1498, 1500, 5, 99, 0, 0, 1499, 1501, 5, 131, 0, 0, 1500, 1499, 1, 0, 0, 0, 1500, 1501, 1, 0, 0, 0, 1501, 1502, 1, 0, 0, 0, 1502, 1504, 3, 100, 50, 0, 1503, 1505, 5, 131, 0, 0, 1504, 1503, 1, 0, 0, 0, 1504, 1505, 1, 0, 0, 0, 1505, 1506, 1, 0, 0, 0, 1506, 1508, 5, 100, 0, 0, 1507, 1509, 5, 131, 0, 0, 1508, 1507, 1, 0, 0, 0, 1508, 1509, 1, 0, 0, 0, 1509, 1510, 1, 0, 0, 0, 1510, 1511, 3, 100, 50, 0, 1511, 171, 1, 0, 0, 0, 1512, 1513, 3, 190, 95, 0, 1513, 173, 1, 0, 0, 0, 1514, 1517, 3, 186, 93, 0, 1515, 1517, 3, 184, 92, 0, 1516, 1514, 1, 0, 0, 0, 1516, 1515, 1, 0, 0, 0, 1517, 175, 1, 0, 0, 0, 1518, 1520, 5, 24, 0, 0, 1519, 1521, 5, 131, 0, 0, 1520, 1519, 1, 0, 0, 0, 1520, 1521, 1, 0, 0, 0, 1521, 1555, 1, 0, 0, 0, 1522, 1524, 3, 182, 91, 0, 1523, 1525, 5, 131, 0, 0, 1524, 1523, 1, 0, 0, 0, 1524, 1525, 1, 0, 0, 0, 1525, 1526, 1, 0, 0, 0, 1526, 1528, 5, 10, 0, 0, 1527, 1529, 5, 131, 0, 0, 1528, 1527, 1, 0, 0, 0, 1528, 1529, 1, 0, 0, 0, 1529, 1530, 1, 0, 0, 0, 1530, 1532, 3, 100, 50, 0, 1531, 1533, 5, 131, 0, 0, 1532, 1531, 1, 0, 0, 0, 1532, 1533, 1, 0, 0, 0, 1533, 1552, 1, 0, 0, 0, 1534, 1536, 5, 2, 0, 0, 1535, 1537, 5, 131, 0, 0, 1536, 1535, 1, 0, 0, 0, 1536, 1537, 1, 0, 0, 0, 1537, 1538, 1, 0, 0, 0, 1538, 1540, 3, 182, 91, 0, 1539, 1541, 5, 131, 0, 0, 1540, 1539, 1, 0, 0, 0, 1540, 1541, 1, 0, 0, 0, 1541, 1542, 1, 0, 0, 0, 1542, 1544, 5, 10, 0, 0, 1543, 1545, 5, 131, 0, 0, 1544, 1543, 1, 0, 0, 0, 1544, 1545, 1, 0, 0, 0, 1545, 1546, 1, 0, 0, 0, 1546, 1548, 3, 100, 50, 0, 1547, 1549, 5, 131, 0, 0, 1548, 1547, 1, 0, 0, 0, 1548, 1549, 1, 0, 0, 0, 1549, 1551, 1, 0, 0, 0, 1550, 1534, 1, 0, 0, 0, 1551, 1554, 1, 0, 0, 0, 1552, 1550, 1, 0, 0, 0, 1552, 1553, 1, 0, 0, 0, 1553, 1556, 1, 0, 0, 0, 1554, 1552, 1, 0, 0, 0, 1555, 1522, 1, 0, 0, 0, 1555, 1556, 1, 0, 0, 0, 1556, 1557, 1, 0, 0, 0, 1557, 1558, 5, 25, 0, 0, 1558, 177, 1, 0, 0, 0, 1559, 1562, 5, 26, 0, 0, 1560, 1563, 3, 190, 95, 0, 1561, 1563, 5, 104, 0, 0, 1562, 1560, 1, 0, 0, 0, 1562, 1561, 1, 0, 0, 0, 1563, 179, 1, 0, 0, 0, 1564, 1569, 3, 130, 65, 0, 1565, 1567, 5, 131, 0, 0, 1566, 1565, 1, 0, 0, 0, 1566, 1567, 1, 0, 0, 0, 1567, 1568, 1, 0, 0, 0, 1568, 1570, 3, 166, 83, 0, 1569, 1566, 1, 0, 0, 0, 1570, 1571, 1, 0, 0, 0, 1571, 1569, 1, 0, 0, 0, 1571, 1572, 1, 0, 0, 0, 1572, 181, 1, 0, 0, 0, 1573, 1574, 3, 188, 94, 0, 1574, 183, 1, 0, 0, 0, 1575, 1576, 7, 3, 0, 0, 1576, 185, 1, 0, 0, 0, 1577, 1578, 7, 4, 0, 0, 1578, 187, 1, 0, 0, 0, 1579, 1582, 3, 190, 95, 0, 1580, 1582, 3, 192, 96, 0, 1581, 1579, 1, 0, 0, 0, 1581, 1580, 1, 0, 0, 0, 1582, 189, 1, 0, 0, 0, 1583, 1584, 7, 5, 0, 0, 1584, 191, 1, 0, 0, 0, 1585, 1586, 7, 6, 0, 0, 1586, 193, 1, 0, 0, 0, 1587, 1588, 7, 7, 0, 0, 1588, 195, 1, 0, 0, 0, 1589, 1590, 7, 8, 0, 0, 1590, 197, 1, 0, 0, 0, 1591, 1592, 7, 9, 0, 0, 1592, 199, 1, 0, 0, 0, 293, 201, 205, 208, 211, 218, 223, 226, 230, 234, 239, 246, 251, 254, 258, 262, 266, 272, 276, 281, 286, 290, 293, 295, 299, 303, 308, 312, 317, 321, 330, 335, 339, 343, 347, 352, 356, 359, 363, 373, 380, 393, 397, 403, 407, 411, 416, 421, 425, 431, 435, 441, 445, 451, 455, 459, 463, 467, 471, 476, 483, 487, 492, 499, 505, 510, 516, 522, 527, 531, 536, 539, 542, 545, 552, 558, 561, 566, 569, 573, 576, 584, 588, 592, 596, 600, 605, 610, 614, 619, 622, 631, 640, 645, 658, 661, 677, 685, 689, 694, 699, 703, 708, 714, 719, 726, 730, 734, 736, 740, 742, 746, 748, 754, 760, 764, 767, 770, 774, 780, 784, 787, 790, 796, 799, 802, 806, 812, 815, 818, 822, 826, 830, 832, 836, 838, 841, 845, 847, 
853, 857, 861, 865, 868, 873, 878, 883, 888, 894, 898, 900, 904, 908, 910, 912, 927, 937, 947, 952, 956, 963, 968, 973, 977, 981, 985, 988, 990, 995, 999, 1003, 1007, 1011, 1015, 1018, 1020, 1025, 1029, 1034, 1039, 1043, 1052, 1054, 1060, 1064, 1071, 1075, 1079, 1082, 1096, 1099, 1113, 1117, 1122, 1126, 1129, 1136, 1140, 1144, 1151, 1155, 1159, 1165, 1169, 1173, 1179, 1183, 1187, 1193, 1197, 1201, 1209, 1217, 1223, 1227, 1231, 1235, 1239, 1242, 1248, 1253, 1258, 1263, 1268, 1273, 1276, 1280, 1284, 1290, 1295, 1299, 1302, 1312, 1316, 1320, 1322, 1326, 1330, 1334, 1338, 1341, 1349, 1353, 1357, 1361, 1365, 1369, 1373, 1376, 1392, 1397, 1401, 1405, 1408, 1411, 1417, 1421, 1425, 1427, 1431, 1435, 1439, 1441, 1445, 1449, 1455, 1461, 1466, 1470, 1474, 1479, 1481, 1484, 1488, 1491, 1494, 1500, 1504, 1508, 1516, 1520, 1524, 1528, 1532, 1536, 1540, 1544, 1548, 1552, 1555, 1562, 1566, 1571, 1581]
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.tokens b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.tokens
new file mode 100644
index 00000000..856ae310
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/Lcypher.tokens
@@ -0,0 +1,179 @@
+T__0=1
+T__1=2
+T__2=3
+T__3=4
+T__4=5
+T__5=6
+T__6=7
+T__7=8
+T__8=9
+T__9=10
+T__10=11
+T__11=12
+T__12=13
+T__13=14
+T__14=15
+T__15=16
+T__16=17
+T__17=18
+T__18=19
+T__19=20
+T__20=21
+T__21=22
+T__22=23
+T__23=24
+T__24=25
+T__25=26
+T__26=27
+T__27=28
+T__28=29
+T__29=30
+T__30=31
+T__31=32
+T__32=33
+T__33=34
+T__34=35
+T__35=36
+T__36=37
+T__37=38
+T__38=39
+T__39=40
+T__40=41
+T__41=42
+T__42=43
+T__43=44
+T__44=45
+EXPLAIN=46
+PROFILE=47
+UNION=48
+ALL=49
+OPTIONAL_=50
+MATCH=51
+UNWIND=52
+AS=53
+MERGE=54
+ON=55
+CREATE=56
+SET=57
+DETACH=58
+DELETE_=59
+REMOVE=60
+CALL=61
+YIELD=62
+WITH=63
+DISTINCT=64
+RETURN=65
+ORDER=66
+BY=67
+L_SKIP=68
+LIMIT=69
+ASCENDING=70
+ASC=71
+DESCENDING=72
+DESC=73
+USING=74
+JOIN=75
+START=76
+WHERE=77
+OR=78
+XOR=79
+AND=80
+NOT=81
+IN=82
+STARTS=83
+ENDS=84
+CONTAINS=85
+REGEXP=86
+IS=87
+NULL_=88
+COUNT=89
+ANY=90
+NONE=91
+SINGLE=92
+TRUE_=93
+FALSE_=94
+EXISTS=95
+CASE=96
+ELSE=97
+END=98
+WHEN=99
+THEN=100
+StringLiteral=101
+EscapedChar=102
+HexInteger=103
+DecimalInteger=104
+OctalInteger=105
+HexLetter=106
+HexDigit=107
+Digit=108
+NonZeroDigit=109
+NonZeroOctDigit=110
+OctDigit=111
+ZeroDigit=112
+ExponentDecimalReal=113
+RegularDecimalReal=114
+FILTER=115
+EXTRACT=116
+UnescapedSymbolicName=117
+CONSTRAINT=118
+DO=119
+FOR=120
+REQUIRE=121
+UNIQUE=122
+MANDATORY=123
+SCALAR=124
+OF=125
+ADD=126
+DROP=127
+IdentifierStart=128
+IdentifierPart=129
+EscapedSymbolicName=130
+SP=131
+WHITESPACE=132
+Comment=133
+';'=1
+','=2
+'='=3
+'+='=4
+'*'=5
+'('=6
+')'=7
+'['=8
+']'=9
+':'=10
+'|'=11
+'..'=12
+'+'=13
+'-'=14
+'/'=15
+'%'=16
+'^'=17
+'<>'=18
+'<'=19
+'>'=20
+'<='=21
+'>='=22
+'.'=23
+'{'=24
+'}'=25
+'$'=26
+'⟨'=27
+'〈'=28
+'﹤'=29
+'＜'=30
+'⟩'=31
+'〉'=32
+'﹥'=33
+'＞'=34
+'­'=35
+'‐'=36
+'‑'=37
+'‒'=38
+'–'=39
+'—'=40
+'―'=41
+'−'=42
+'﹘'=43
+'﹣'=44
+'-'=45
+'0'=112
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.interp b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.interp
new file mode 100644
index 00000000..85d5663a
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.interp
@@ -0,0 +1,436 @@
+token literal names:
+null
+';'
+','
+'='
+'+='
+'*'
+'('
+')'
+'['
+']'
+':'
+'|'
+'..'
+'+'
+'-'
+'/'
+'%'
+'^'
+'<>'
+'<'
+'>'
+'<='
+'>='
+'.'
+'{'
+'}'
+'$'
+'⟨'
+'〈'
+'﹤'
+'＜'
+'⟩'
+'〉'
+'﹥'
+'＞'
+'­'
+'‐'
+'‑'
+'‒'
+'–'
+'—'
+'―'
+'−'
+'﹘'
+'﹣'
+'-'
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+'0'
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+
+token symbolic names:
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+null
+EXPLAIN
+PROFILE
+UNION
+ALL
+OPTIONAL_
+MATCH
+UNWIND
+AS
+MERGE
+ON
+CREATE
+SET
+DETACH
+DELETE_
+REMOVE
+CALL
+YIELD
+WITH
+DISTINCT
+RETURN
+ORDER
+BY
+L_SKIP
+LIMIT
+ASCENDING
+ASC
+DESCENDING
+DESC
+USING
+JOIN
+START
+WHERE
+OR
+XOR
+AND
+NOT
+IN
+STARTS
+ENDS
+CONTAINS
+REGEXP
+IS
+NULL_
+COUNT
+ANY
+NONE
+SINGLE
+TRUE_
+FALSE_
+EXISTS
+CASE
+ELSE
+END
+WHEN
+THEN
+StringLiteral
+EscapedChar
+HexInteger
+DecimalInteger
+OctalInteger
+HexLetter
+HexDigit
+Digit
+NonZeroDigit
+NonZeroOctDigit
+OctDigit
+ZeroDigit
+ExponentDecimalReal
+RegularDecimalReal
+FILTER
+EXTRACT
+UnescapedSymbolicName
+CONSTRAINT
+DO
+FOR
+REQUIRE
+UNIQUE
+MANDATORY
+SCALAR
+OF
+ADD
+DROP
+IdentifierStart
+IdentifierPart
+EscapedSymbolicName
+SP
+WHITESPACE
+Comment
+
+rule names:
+T__0
+T__1
+T__2
+T__3
+T__4
+T__5
+T__6
+T__7
+T__8
+T__9
+T__10
+T__11
+T__12
+T__13
+T__14
+T__15
+T__16
+T__17
+T__18
+T__19
+T__20
+T__21
+T__22
+T__23
+T__24
+T__25
+T__26
+T__27
+T__28
+T__29
+T__30
+T__31
+T__32
+T__33
+T__34
+T__35
+T__36
+T__37
+T__38
+T__39
+T__40
+T__41
+T__42
+T__43
+T__44
+EXPLAIN
+PROFILE
+UNION
+ALL
+OPTIONAL_
+MATCH
+UNWIND
+AS
+MERGE
+ON
+CREATE
+SET
+DETACH
+DELETE_
+REMOVE
+CALL
+YIELD
+WITH
+DISTINCT
+RETURN
+ORDER
+BY
+L_SKIP
+LIMIT
+ASCENDING
+ASC
+DESCENDING
+DESC
+USING
+JOIN
+START
+WHERE
+OR
+XOR
+AND
+NOT
+IN
+STARTS
+ENDS
+CONTAINS
+REGEXP
+IS
+NULL_
+COUNT
+ANY
+NONE
+SINGLE
+TRUE_
+FALSE_
+EXISTS
+CASE
+ELSE
+END
+WHEN
+THEN
+StringLiteral
+EscapedChar
+HexInteger
+DecimalInteger
+OctalInteger
+HexLetter
+HexDigit
+Digit
+NonZeroDigit
+NonZeroOctDigit
+OctDigit
+ZeroDigit
+ExponentDecimalReal
+RegularDecimalReal
+FILTER
+EXTRACT
+UnescapedSymbolicName
+CONSTRAINT
+DO
+FOR
+REQUIRE
+UNIQUE
+MANDATORY
+SCALAR
+OF
+ADD
+DROP
+IdentifierStart
+IdentifierPart
+EscapedSymbolicName
+SP
+WHITESPACE
+Comment
+FF
+EscapedSymbolicName_0
+RS
+ID_Continue
+Comment_1
+StringLiteral_1
+Comment_3
+Comment_2
+GS
+FS
+CR
+Sc
+SPACE
+Pc
+TAB
+StringLiteral_0
+LF
+VT
+US
+ID_Start
+
+channel names:
+DEFAULT_TOKEN_CHANNEL
+HIDDEN
+
+mode names:
+DEFAULT_MODE
+
+atn:
+[4, 0, 133, 1043, 6, -1, 2, 0, 7, 0, 2, 1, 7, 1, 2, 2, 7, 2, 2, 3, 7, 3, 2, 4, 7, 4, 2, 5, 7, 5, 2, 6, 7, 6, 2, 7, 7, 7, 2, 8, 7, 8, 2, 9, 7, 9, 2, 10, 7, 10, 2, 11, 7, 11, 2, 12, 7, 12, 2, 13, 7, 13, 2, 14, 7, 14, 2, 15, 7, 15, 2, 16, 7, 16, 2, 17, 7, 17, 2, 18, 7, 18, 2, 19, 7, 19, 2, 20, 7, 20, 2, 21, 7, 21, 2, 22, 7, 22, 2, 23, 7, 23, 2, 24, 7, 24, 2, 25, 7, 25, 2, 26, 7, 26, 2, 27, 7, 27, 2, 28, 7, 28, 2, 29, 7, 29, 2, 30, 7, 30, 2, 31, 7, 31, 2, 32, 7, 32, 2, 33, 7, 33, 2, 34, 7, 34, 2, 35, 7, 35, 2, 36, 7, 36, 2, 37, 7, 37, 2, 38, 7, 38, 2, 39, 7, 39, 2, 40, 7, 40, 2, 41, 7, 41, 2, 42, 7, 42, 2, 43, 7, 43, 2, 44, 7, 44, 2, 45, 7, 45, 2, 46, 7, 46, 2, 47, 7, 47, 2, 48, 7, 48, 2, 49, 7, 49, 2, 50, 7, 50, 2, 51, 7, 51, 2, 52, 7, 52, 2, 53, 7, 53, 2, 54, 7, 54, 2, 55, 7, 55, 2, 56, 7, 56, 2, 57, 7, 57, 2, 58, 7, 58, 2, 59, 7, 59, 2, 60, 7, 60, 2, 61, 7, 61, 2, 62, 7, 62, 2, 63, 7, 63, 2, 64, 7, 64, 2, 65, 7, 65, 2, 66, 7, 66, 2, 67, 7, 67, 2, 68, 7, 68, 2, 69, 7, 69, 2, 70, 7, 70, 2, 71, 7, 71, 2, 72, 7, 72, 2, 73, 7, 73, 2, 74, 7, 74, 2, 75, 7, 75, 2, 76, 7, 76, 2, 77, 7, 77, 2, 78, 7, 78, 2, 79, 7, 79, 2, 80, 7, 80, 2, 81, 7, 81, 2, 82, 7, 82, 2, 83, 7, 83, 2, 84, 7, 84, 2, 85, 7, 85, 2, 86, 7, 86, 2, 87, 7, 87, 2, 88, 7, 88, 2, 89, 7, 89, 2, 90, 7, 90, 2, 91, 7, 91, 2, 92, 7, 92, 2, 93, 7, 93, 2, 94, 7, 94, 2, 95, 7, 95, 2, 96, 7, 96, 2, 97, 7, 97, 2, 98, 7, 98, 2, 99, 7, 99, 2, 100, 7, 100, 2, 101, 7, 101, 2, 102, 7, 102, 2, 103, 7, 103, 2, 104, 7, 104, 2, 105, 7, 105, 2, 106, 7, 106, 2, 107, 7, 107, 2, 108, 7, 108, 2, 109, 7, 109, 2, 110, 7, 110, 2, 111, 7, 111, 2, 112, 7, 112, 2, 113, 7, 113, 2, 114, 7, 114, 2, 115, 7, 115, 2, 116, 7, 116, 2, 117, 7, 117, 2, 118, 7, 118, 2, 119, 7, 119, 2, 120, 7, 120, 2, 121, 7, 121, 2, 122, 7, 122, 2, 123, 7, 123, 2, 124, 7, 124, 2, 125, 7, 125, 2, 126, 7, 126, 2, 127, 7, 127, 2, 128, 7, 128, 2, 129, 7, 129, 2, 130, 7, 130, 2, 131, 7, 131, 2, 132, 7, 132, 2, 133, 7, 133, 2, 134, 7, 134, 2, 135, 7, 135, 2, 136, 7, 136, 2, 137, 7, 137, 2, 138, 7, 138, 2, 139, 7, 139, 2, 140, 7, 140, 2, 141, 7, 141, 2, 142, 7, 142, 2, 143, 7, 143, 2, 144, 7, 144, 2, 145, 7, 145, 2, 146, 7, 146, 2, 147, 7, 147, 2, 148, 7, 148, 2, 149, 7, 149, 2, 150, 7, 150, 2, 151, 7, 151, 2, 152, 7, 152, 1, 0, 1, 0, 1, 1, 1, 1, 1, 2, 1, 2, 1, 3, 1, 3, 1, 3, 1, 4, 1, 4, 1, 5, 1, 5, 1, 6, 1, 6, 1, 7, 1, 7, 1, 8, 1, 8, 1, 9, 1, 9, 1, 10, 1, 10, 1, 11, 1, 11, 1, 11, 1, 12, 1, 12, 1, 13, 1, 13, 1, 14, 1, 14, 1, 15, 1, 15, 1, 16, 1, 16, 1, 17, 1, 17, 1, 17, 1, 18, 1, 18, 1, 19, 1, 19, 1, 20, 1, 20, 1, 20, 1, 21, 1, 21, 1, 21, 1, 22, 1, 22, 1, 23, 1, 23, 1, 24, 1, 24, 1, 25, 1, 25, 1, 26, 1, 26, 1, 27, 1, 27, 1, 28, 1, 28, 1, 29, 1, 29, 1, 30, 1, 30, 1, 31, 1, 31, 1, 32, 1, 32, 1, 33, 1, 33, 1, 34, 1, 34, 1, 35, 1, 35, 1, 36, 1, 36, 1, 37, 1, 37, 1, 38, 1, 38, 1, 39, 1, 39, 1, 40, 1, 40, 1, 41, 1, 41, 1, 42, 1, 42, 1, 43, 1, 43, 1, 44, 1, 44, 1, 45, 1, 45, 1, 45, 1, 45, 1, 45, 1, 45, 1, 45, 1, 45, 1, 46, 1, 46, 1, 46, 1, 46, 1, 46, 1, 46, 1, 46, 1, 46, 1, 47, 1, 47, 1, 47, 1, 47, 1, 47, 1, 47, 1, 48, 1, 48, 1, 48, 1, 48, 1, 49, 1, 49, 1, 49, 1, 49, 1, 49, 1, 49, 1, 49, 1, 49, 1, 49, 1, 50, 1, 50, 1, 50, 1, 50, 1, 50, 1, 50, 1, 51, 1, 51, 1, 51, 1, 51, 1, 51, 1, 51, 1, 51, 1, 52, 1, 52, 1, 52, 1, 53, 1, 53, 1, 53, 1, 53, 1, 53, 1, 53, 1, 54, 1, 54, 1, 54, 1, 55, 1, 55, 1, 55, 1, 55, 1, 55, 1, 55, 1, 55, 1, 56, 1, 56, 1, 56, 1, 56, 1, 57, 1, 57, 1, 57, 1, 57, 1, 57, 1, 57, 1, 57, 1, 58, 1, 58, 1, 58, 1, 58, 1, 58, 1, 58, 1, 58, 1, 59, 1, 59, 1, 59, 1, 59, 1, 59, 1, 59, 1, 59, 1, 60, 1, 
60, 1, 60, 1, 60, 1, 60, 1, 61, 1, 61, 1, 61, 1, 61, 1, 61, 1, 61, 1, 62, 1, 62, 1, 62, 1, 62, 1, 62, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 63, 1, 64, 1, 64, 1, 64, 1, 64, 1, 64, 1, 64, 1, 64, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 1, 65, 1, 66, 1, 66, 1, 66, 1, 67, 1, 67, 1, 67, 1, 67, 1, 67, 1, 68, 1, 68, 1, 68, 1, 68, 1, 68, 1, 68, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 69, 1, 70, 1, 70, 1, 70, 1, 70, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 71, 1, 72, 1, 72, 1, 72, 1, 72, 1, 72, 1, 73, 1, 73, 1, 73, 1, 73, 1, 73, 1, 73, 1, 74, 1, 74, 1, 74, 1, 74, 1, 74, 1, 75, 1, 75, 1, 75, 1, 75, 1, 75, 1, 75, 1, 76, 1, 76, 1, 76, 1, 76, 1, 76, 1, 76, 1, 77, 1, 77, 1, 77, 1, 78, 1, 78, 1, 78, 1, 78, 1, 79, 1, 79, 1, 79, 1, 79, 1, 80, 1, 80, 1, 80, 1, 80, 1, 81, 1, 81, 1, 81, 1, 82, 1, 82, 1, 82, 1, 82, 1, 82, 1, 82, 1, 82, 1, 83, 1, 83, 1, 83, 1, 83, 1, 83, 1, 84, 1, 84, 1, 84, 1, 84, 1, 84, 1, 84, 1, 84, 1, 84, 1, 84, 1, 85, 1, 85, 1, 85, 1, 85, 1, 85, 1, 85, 1, 85, 1, 86, 1, 86, 1, 86, 1, 87, 1, 87, 1, 87, 1, 87, 1, 87, 1, 88, 1, 88, 1, 88, 1, 88, 1, 88, 1, 88, 1, 89, 1, 89, 1, 89, 1, 89, 1, 90, 1, 90, 1, 90, 1, 90, 1, 90, 1, 91, 1, 91, 1, 91, 1, 91, 1, 91, 1, 91, 1, 91, 1, 92, 1, 92, 1, 92, 1, 92, 1, 92, 1, 93, 1, 93, 1, 93, 1, 93, 1, 93, 1, 93, 1, 94, 1, 94, 1, 94, 1, 94, 1, 94, 1, 94, 1, 94, 1, 95, 1, 95, 1, 95, 1, 95, 1, 95, 1, 96, 1, 96, 1, 96, 1, 96, 1, 96, 1, 97, 1, 97, 1, 97, 1, 97, 1, 98, 1, 98, 1, 98, 1, 98, 1, 98, 1, 99, 1, 99, 1, 99, 1, 99, 1, 99, 1, 100, 1, 100, 1, 100, 5, 100, 721, 8, 100, 10, 100, 12, 100, 724, 9, 100, 1, 100, 1, 100, 1, 100, 1, 100, 5, 100, 730, 8, 100, 10, 100, 12, 100, 733, 9, 100, 1, 100, 3, 100, 736, 8, 100, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 1, 101, 3, 101, 756, 8, 101, 1, 102, 1, 102, 1, 102, 1, 102, 4, 102, 762, 8, 102, 11, 102, 12, 102, 763, 1, 103, 1, 103, 1, 103, 5, 103, 769, 8, 103, 10, 103, 12, 103, 772, 9, 103, 3, 103, 774, 8, 103, 1, 104, 1, 104, 4, 104, 778, 8, 104, 11, 104, 12, 104, 779, 1, 105, 3, 105, 783, 8, 105, 1, 106, 1, 106, 3, 106, 787, 8, 106, 1, 107, 1, 107, 3, 107, 791, 8, 107, 1, 108, 1, 108, 3, 108, 795, 8, 108, 1, 109, 1, 109, 1, 110, 1, 110, 3, 110, 801, 8, 110, 1, 111, 1, 111, 1, 112, 4, 112, 806, 8, 112, 11, 112, 12, 112, 807, 1, 112, 4, 112, 811, 8, 112, 11, 112, 12, 112, 812, 1, 112, 1, 112, 4, 112, 817, 8, 112, 11, 112, 12, 112, 818, 1, 112, 1, 112, 4, 112, 823, 8, 112, 11, 112, 12, 112, 824, 3, 112, 827, 8, 112, 1, 112, 1, 112, 3, 112, 831, 8, 112, 1, 112, 4, 112, 834, 8, 112, 11, 112, 12, 112, 835, 1, 113, 5, 113, 839, 8, 113, 10, 113, 12, 113, 842, 9, 113, 1, 113, 1, 113, 4, 113, 846, 8, 113, 11, 113, 12, 113, 847, 1, 114, 1, 114, 1, 114, 1, 114, 1, 114, 1, 114, 1, 114, 1, 115, 1, 115, 1, 115, 1, 115, 1, 115, 1, 115, 1, 115, 1, 115, 1, 116, 1, 116, 5, 116, 867, 8, 116, 10, 116, 12, 116, 870, 9, 116, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 117, 1, 118, 1, 118, 1, 118, 1, 119, 1, 119, 1, 119, 1, 119, 1, 120, 1, 120, 1, 120, 1, 120, 1, 120, 1, 120, 1, 120, 1, 120, 1, 121, 1, 121, 1, 121, 1, 121, 1, 121, 1, 121, 1, 121, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 122, 1, 123, 1, 123, 1, 123, 1, 123, 1, 123, 1, 123, 1, 123, 1, 124, 1, 124, 1, 124, 1, 125, 1, 125, 1, 125, 1, 125, 1, 126, 1, 126, 1, 126, 1, 126, 1, 126, 1, 127, 1, 127, 3, 127, 936, 8, 127, 1, 128, 1, 128, 3, 128, 940, 8, 
128, 1, 129, 1, 129, 5, 129, 944, 8, 129, 10, 129, 12, 129, 947, 9, 129, 1, 129, 4, 129, 950, 8, 129, 11, 129, 12, 129, 951, 1, 130, 4, 130, 955, 8, 130, 11, 130, 12, 130, 956, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 1, 131, 3, 131, 971, 8, 131, 1, 132, 1, 132, 1, 132, 1, 132, 1, 132, 1, 132, 5, 132, 979, 8, 132, 10, 132, 12, 132, 982, 9, 132, 1, 132, 1, 132, 1, 132, 1, 132, 1, 132, 1, 132, 5, 132, 990, 8, 132, 10, 132, 12, 132, 993, 9, 132, 1, 132, 3, 132, 996, 8, 132, 1, 132, 1, 132, 3, 132, 1000, 8, 132, 3, 132, 1002, 8, 132, 1, 133, 1, 133, 1, 134, 1, 134, 1, 135, 1, 135, 1, 136, 1, 136, 1, 137, 1, 137, 1, 138, 1, 138, 1, 139, 1, 139, 1, 140, 1, 140, 1, 141, 1, 141, 1, 142, 1, 142, 1, 143, 1, 143, 1, 144, 1, 144, 1, 145, 1, 145, 1, 146, 1, 146, 1, 147, 1, 147, 1, 148, 1, 148, 1, 149, 1, 149, 1, 150, 1, 150, 1, 151, 1, 151, 1, 152, 1, 152, 0, 0, 153, 1, 1, 3, 2, 5, 3, 7, 4, 9, 5, 11, 6, 13, 7, 15, 8, 17, 9, 19, 10, 21, 11, 23, 12, 25, 13, 27, 14, 29, 15, 31, 16, 33, 17, 35, 18, 37, 19, 39, 20, 41, 21, 43, 22, 45, 23, 47, 24, 49, 25, 51, 26, 53, 27, 55, 28, 57, 29, 59, 30, 61, 31, 63, 32, 65, 33, 67, 34, 69, 35, 71, 36, 73, 37, 75, 38, 77, 39, 79, 40, 81, 41, 83, 42, 85, 43, 87, 44, 89, 45, 91, 46, 93, 47, 95, 48, 97, 49, 99, 50, 101, 51, 103, 52, 105, 53, 107, 54, 109, 55, 111, 56, 113, 57, 115, 58, 117, 59, 119, 60, 121, 61, 123, 62, 125, 63, 127, 64, 129, 65, 131, 66, 133, 67, 135, 68, 137, 69, 139, 70, 141, 71, 143, 72, 145, 73, 147, 74, 149, 75, 151, 76, 153, 77, 155, 78, 157, 79, 159, 80, 161, 81, 163, 82, 165, 83, 167, 84, 169, 85, 171, 86, 173, 87, 175, 88, 177, 89, 179, 90, 181, 91, 183, 92, 185, 93, 187, 94, 189, 95, 191, 96, 193, 97, 195, 98, 197, 99, 199, 100, 201, 101, 203, 102, 205, 103, 207, 104, 209, 105, 211, 106, 213, 107, 215, 108, 217, 109, 219, 110, 221, 111, 223, 112, 225, 113, 227, 114, 229, 115, 231, 116, 233, 117, 235, 118, 237, 119, 239, 120, 241, 121, 243, 122, 245, 123, 247, 124, 249, 125, 251, 126, 253, 127, 255, 128, 257, 129, 259, 130, 261, 131, 263, 132, 265, 133, 267, 0, 269, 0, 271, 0, 273, 0, 275, 0, 277, 0, 279, 0, 281, 0, 283, 0, 285, 0, 287, 0, 289, 0, 291, 0, 293, 0, 295, 0, 297, 0, 299, 0, 301, 0, 303, 0, 305, 0, 1, 0, 48, 2, 0, 69, 69, 101, 101, 2, 0, 88, 88, 120, 120, 2, 0, 80, 80, 112, 112, 2, 0, 76, 76, 108, 108, 2, 0, 65, 65, 97, 97, 2, 0, 73, 73, 105, 105, 2, 0, 78, 78, 110, 110, 2, 0, 82, 82, 114, 114, 2, 0, 79, 79, 111, 111, 2, 0, 70, 70, 102, 102, 2, 0, 85, 85, 117, 117, 2, 0, 84, 84, 116, 116, 2, 0, 77, 77, 109, 109, 2, 0, 67, 67, 99, 99, 2, 0, 72, 72, 104, 104, 2, 0, 87, 87, 119, 119, 2, 0, 68, 68, 100, 100, 2, 0, 83, 83, 115, 115, 2, 0, 71, 71, 103, 103, 2, 0, 86, 86, 118, 118, 2, 0, 89, 89, 121, 121, 2, 0, 66, 66, 98, 98, 2, 0, 75, 75, 107, 107, 2, 0, 74, 74, 106, 106, 13, 0, 34, 34, 39, 39, 66, 66, 70, 70, 78, 78, 82, 82, 84, 84, 92, 92, 98, 98, 102, 102, 110, 110, 114, 114, 116, 116, 2, 0, 65, 70, 97, 102, 2, 0, 81, 81, 113, 113, 8, 0, 160, 160, 5760, 5760, 6158, 6158, 8192, 8202, 8232, 8233, 8239, 8239, 8287, 8287, 12288, 12288, 1, 0, 12, 12, 2, 0, 0, 95, 97, 65535, 1, 0, 30, 30, 429, 0, 48, 57, 65, 90, 95, 95, 97, 122, 170, 170, 181, 181, 183, 183, 186, 186, 192, 214, 216, 246, 248, 705, 710, 721, 736, 740, 748, 748, 750, 750, 768, 884, 886, 887, 890, 893, 902, 906, 908, 908, 910, 929, 931, 1013, 1015, 1153, 1155, 1159, 1162, 1319, 1329, 1366, 1369, 1369, 1377, 1415, 1425, 1469, 1471, 1471, 1473, 1474, 1476, 1477, 1479, 1479, 1488, 1514, 1520, 1522, 1552, 1562, 1568, 1641, 1646, 1747, 
1749, 1756, 1759, 1768, 1770, 1788, 1791, 1791, 1808, 1866, 1869, 1969, 1984, 2037, 2042, 2042, 2048, 2093, 2112, 2139, 2208, 2208, 2210, 2220, 2276, 2302, 2304, 2403, 2406, 2415, 2417, 2423, 2425, 2431, 2433, 2435, 2437, 2444, 2447, 2448, 2451, 2472, 2474, 2480, 2482, 2482, 2486, 2489, 2492, 2500, 2503, 2504, 2507, 2510, 2519, 2519, 2524, 2525, 2527, 2531, 2534, 2545, 2561, 2563, 2565, 2570, 2575, 2576, 2579, 2600, 2602, 2608, 2610, 2611, 2613, 2614, 2616, 2617, 2620, 2620, 2622, 2626, 2631, 2632, 2635, 2637, 2641, 2641, 2649, 2652, 2654, 2654, 2662, 2677, 2689, 2691, 2693, 2701, 2703, 2705, 2707, 2728, 2730, 2736, 2738, 2739, 2741, 2745, 2748, 2757, 2759, 2761, 2763, 2765, 2768, 2768, 2784, 2787, 2790, 2799, 2817, 2819, 2821, 2828, 2831, 2832, 2835, 2856, 2858, 2864, 2866, 2867, 2869, 2873, 2876, 2884, 2887, 2888, 2891, 2893, 2902, 2903, 2908, 2909, 2911, 2915, 2918, 2927, 2929, 2929, 2946, 2947, 2949, 2954, 2958, 2960, 2962, 2965, 2969, 2970, 2972, 2972, 2974, 2975, 2979, 2980, 2984, 2986, 2990, 3001, 3006, 3010, 3014, 3016, 3018, 3021, 3024, 3024, 3031, 3031, 3046, 3055, 3073, 3075, 3077, 3084, 3086, 3088, 3090, 3112, 3114, 3123, 3125, 3129, 3133, 3140, 3142, 3144, 3146, 3149, 3157, 3158, 3160, 3161, 3168, 3171, 3174, 3183, 3202, 3203, 3205, 3212, 3214, 3216, 3218, 3240, 3242, 3251, 3253, 3257, 3260, 3268, 3270, 3272, 3274, 3277, 3285, 3286, 3294, 3294, 3296, 3299, 3302, 3311, 3313, 3314, 3330, 3331, 3333, 3340, 3342, 3344, 3346, 3386, 3389, 3396, 3398, 3400, 3402, 3406, 3415, 3415, 3424, 3427, 3430, 3439, 3450, 3455, 3458, 3459, 3461, 3478, 3482, 3505, 3507, 3515, 3517, 3517, 3520, 3526, 3530, 3530, 3535, 3540, 3542, 3542, 3544, 3551, 3570, 3571, 3585, 3642, 3648, 3662, 3664, 3673, 3713, 3714, 3716, 3716, 3719, 3720, 3722, 3722, 3725, 3725, 3732, 3735, 3737, 3743, 3745, 3747, 3749, 3749, 3751, 3751, 3754, 3755, 3757, 3769, 3771, 3773, 3776, 3780, 3782, 3782, 3784, 3789, 3792, 3801, 3804, 3807, 3840, 3840, 3864, 3865, 3872, 3881, 3893, 3893, 3895, 3895, 3897, 3897, 3902, 3911, 3913, 3948, 3953, 3972, 3974, 3991, 3993, 4028, 4038, 4038, 4096, 4169, 4176, 4253, 4256, 4293, 4295, 4295, 4301, 4301, 4304, 4346, 4348, 4680, 4682, 4685, 4688, 4694, 4696, 4696, 4698, 4701, 4704, 4744, 4746, 4749, 4752, 4784, 4786, 4789, 4792, 4798, 4800, 4800, 4802, 4805, 4808, 4822, 4824, 4880, 4882, 4885, 4888, 4954, 4957, 4959, 4969, 4977, 4992, 5007, 5024, 5108, 5121, 5740, 5743, 5759, 5761, 5786, 5792, 5866, 5870, 5872, 5888, 5900, 5902, 5908, 5920, 5940, 5952, 5971, 5984, 5996, 5998, 6000, 6002, 6003, 6016, 6099, 6103, 6103, 6108, 6109, 6112, 6121, 6155, 6157, 6160, 6169, 6176, 6263, 6272, 6314, 6320, 6389, 6400, 6428, 6432, 6443, 6448, 6459, 6470, 6509, 6512, 6516, 6528, 6571, 6576, 6601, 6608, 6618, 6656, 6683, 6688, 6750, 6752, 6780, 6783, 6793, 6800, 6809, 6823, 6823, 6912, 6987, 6992, 7001, 7019, 7027, 7040, 7155, 7168, 7223, 7232, 7241, 7245, 7293, 7376, 7378, 7380, 7414, 7424, 7654, 7676, 7957, 7960, 7965, 7968, 8005, 8008, 8013, 8016, 8023, 8025, 8025, 8027, 8027, 8029, 8029, 8031, 8061, 8064, 8116, 8118, 8124, 8126, 8126, 8130, 8132, 8134, 8140, 8144, 8147, 8150, 8155, 8160, 8172, 8178, 8180, 8182, 8188, 8255, 8256, 8276, 8276, 8305, 8305, 8319, 8319, 8336, 8348, 8400, 8412, 8417, 8417, 8421, 8432, 8450, 8450, 8455, 8455, 8458, 8467, 8469, 8469, 8472, 8477, 8484, 8484, 8486, 8486, 8488, 8488, 8490, 8505, 8508, 8511, 8517, 8521, 8526, 8526, 8544, 8584, 11264, 11310, 11312, 11358, 11360, 11492, 11499, 11507, 11520, 11557, 11559, 11559, 11565, 11565, 11568, 11623, 11631, 11631, 11647, 11670, 11680, 
11686, 11688, 11694, 11696, 11702, 11704, 11710, 11712, 11718, 11720, 11726, 11728, 11734, 11736, 11742, 11744, 11775, 12293, 12295, 12321, 12335, 12337, 12341, 12344, 12348, 12353, 12438, 12441, 12447, 12449, 12538, 12540, 12543, 12549, 12589, 12593, 12686, 12704, 12730, 12784, 12799, 13312, 19893, 19968, 40908, 40960, 42124, 42192, 42237, 42240, 42508, 42512, 42539, 42560, 42607, 42612, 42621, 42623, 42647, 42655, 42737, 42775, 42783, 42786, 42888, 42891, 42894, 42896, 42899, 42912, 42922, 43000, 43047, 43072, 43123, 43136, 43204, 43216, 43225, 43232, 43255, 43259, 43259, 43264, 43309, 43312, 43347, 43360, 43388, 43392, 43456, 43471, 43481, 43520, 43574, 43584, 43597, 43600, 43609, 43616, 43638, 43642, 43643, 43648, 43714, 43739, 43741, 43744, 43759, 43762, 43766, 43777, 43782, 43785, 43790, 43793, 43798, 43808, 43814, 43816, 43822, 43968, 44010, 44012, 44013, 44016, 44025, 44032, 55203, 55216, 55238, 55243, 55291, 63744, 64109, 64112, 64217, 64256, 64262, 64275, 64279, 64285, 64296, 64298, 64310, 64312, 64316, 64318, 64318, 64320, 64321, 64323, 64324, 64326, 64433, 64467, 64829, 64848, 64911, 64914, 64967, 65008, 65019, 65024, 65039, 65056, 65062, 65075, 65076, 65101, 65103, 65136, 65140, 65142, 65276, 65296, 65305, 65313, 65338, 65343, 65343, 65345, 65370, 65382, 65470, 65474, 65479, 65482, 65487, 65490, 65495, 65498, 65500, 2, 0, 0, 41, 43, 65535, 3, 0, 0, 38, 40, 91, 93, 65535, 3, 0, 0, 9, 11, 12, 14, 65535, 2, 0, 0, 46, 48, 65535, 1, 0, 29, 29, 1, 0, 28, 28, 1, 0, 13, 13, 17, 0, 36, 36, 162, 165, 1423, 1423, 1547, 1547, 2546, 2547, 2555, 2555, 2801, 2801, 3065, 3065, 3647, 3647, 6107, 6107, 8352, 8378, 43064, 43064, 65020, 65020, 65129, 65129, 65284, 65284, 65504, 65505, 65509, 65510, 1, 0, 32, 32, 6, 0, 95, 95, 8255, 8256, 8276, 8276, 65075, 65076, 65101, 65103, 65343, 65343, 1, 0, 9, 9, 3, 0, 0, 33, 35, 91, 93, 65535, 1, 0, 10, 10, 1, 0, 11, 11, 1, 0, 31, 31, 370, 0, 65, 90, 97, 122, 170, 170, 181, 181, 186, 186, 192, 214, 216, 246, 248, 705, 710, 721, 736, 740, 748, 748, 750, 750, 880, 884, 886, 887, 890, 893, 902, 902, 904, 906, 908, 908, 910, 929, 931, 1013, 1015, 1153, 1162, 1319, 1329, 1366, 1369, 1369, 1377, 1415, 1488, 1514, 1520, 1522, 1568, 1610, 1646, 1647, 1649, 1747, 1749, 1749, 1765, 1766, 1774, 1775, 1786, 1788, 1791, 1791, 1808, 1808, 1810, 1839, 1869, 1957, 1969, 1969, 1994, 2026, 2036, 2037, 2042, 2042, 2048, 2069, 2074, 2074, 2084, 2084, 2088, 2088, 2112, 2136, 2208, 2208, 2210, 2220, 2308, 2361, 2365, 2365, 2384, 2384, 2392, 2401, 2417, 2423, 2425, 2431, 2437, 2444, 2447, 2448, 2451, 2472, 2474, 2480, 2482, 2482, 2486, 2489, 2493, 2493, 2510, 2510, 2524, 2525, 2527, 2529, 2544, 2545, 2565, 2570, 2575, 2576, 2579, 2600, 2602, 2608, 2610, 2611, 2613, 2614, 2616, 2617, 2649, 2652, 2654, 2654, 2674, 2676, 2693, 2701, 2703, 2705, 2707, 2728, 2730, 2736, 2738, 2739, 2741, 2745, 2749, 2749, 2768, 2768, 2784, 2785, 2821, 2828, 2831, 2832, 2835, 2856, 2858, 2864, 2866, 2867, 2869, 2873, 2877, 2877, 2908, 2909, 2911, 2913, 2929, 2929, 2947, 2947, 2949, 2954, 2958, 2960, 2962, 2965, 2969, 2970, 2972, 2972, 2974, 2975, 2979, 2980, 2984, 2986, 2990, 3001, 3024, 3024, 3077, 3084, 3086, 3088, 3090, 3112, 3114, 3123, 3125, 3129, 3133, 3133, 3160, 3161, 3168, 3169, 3205, 3212, 3214, 3216, 3218, 3240, 3242, 3251, 3253, 3257, 3261, 3261, 3294, 3294, 3296, 3297, 3313, 3314, 3333, 3340, 3342, 3344, 3346, 3386, 3389, 3389, 3406, 3406, 3424, 3425, 3450, 3455, 3461, 3478, 3482, 3505, 3507, 3515, 3517, 3517, 3520, 3526, 3585, 3632, 3634, 3635, 3648, 3654, 3713, 3714, 3716, 3716, 3719, 
3720, 3722, 3722, 3725, 3725, 3732, 3735, 3737, 3743, 3745, 3747, 3749, 3749, 3751, 3751, 3754, 3755, 3757, 3760, 3762, 3763, 3773, 3773, 3776, 3780, 3782, 3782, 3804, 3807, 3840, 3840, 3904, 3911, 3913, 3948, 3976, 3980, 4096, 4138, 4159, 4159, 4176, 4181, 4186, 4189, 4193, 4193, 4197, 4198, 4206, 4208, 4213, 4225, 4238, 4238, 4256, 4293, 4295, 4295, 4301, 4301, 4304, 4346, 4348, 4680, 4682, 4685, 4688, 4694, 4696, 4696, 4698, 4701, 4704, 4744, 4746, 4749, 4752, 4784, 4786, 4789, 4792, 4798, 4800, 4800, 4802, 4805, 4808, 4822, 4824, 4880, 4882, 4885, 4888, 4954, 4992, 5007, 5024, 5108, 5121, 5740, 5743, 5759, 5761, 5786, 5792, 5866, 5870, 5872, 5888, 5900, 5902, 5905, 5920, 5937, 5952, 5969, 5984, 5996, 5998, 6000, 6016, 6067, 6103, 6103, 6108, 6108, 6176, 6263, 6272, 6312, 6314, 6314, 6320, 6389, 6400, 6428, 6480, 6509, 6512, 6516, 6528, 6571, 6593, 6599, 6656, 6678, 6688, 6740, 6823, 6823, 6917, 6963, 6981, 6987, 7043, 7072, 7086, 7087, 7098, 7141, 7168, 7203, 7245, 7247, 7258, 7293, 7401, 7404, 7406, 7409, 7413, 7414, 7424, 7615, 7680, 7957, 7960, 7965, 7968, 8005, 8008, 8013, 8016, 8023, 8025, 8025, 8027, 8027, 8029, 8029, 8031, 8061, 8064, 8116, 8118, 8124, 8126, 8126, 8130, 8132, 8134, 8140, 8144, 8147, 8150, 8155, 8160, 8172, 8178, 8180, 8182, 8188, 8305, 8305, 8319, 8319, 8336, 8348, 8450, 8450, 8455, 8455, 8458, 8467, 8469, 8469, 8472, 8477, 8484, 8484, 8486, 8486, 8488, 8488, 8490, 8505, 8508, 8511, 8517, 8521, 8526, 8526, 8544, 8584, 11264, 11310, 11312, 11358, 11360, 11492, 11499, 11502, 11506, 11507, 11520, 11557, 11559, 11559, 11565, 11565, 11568, 11623, 11631, 11631, 11648, 11670, 11680, 11686, 11688, 11694, 11696, 11702, 11704, 11710, 11712, 11718, 11720, 11726, 11728, 11734, 11736, 11742, 12293, 12295, 12321, 12329, 12337, 12341, 12344, 12348, 12353, 12438, 12443, 12447, 12449, 12538, 12540, 12543, 12549, 12589, 12593, 12686, 12704, 12730, 12784, 12799, 13312, 19893, 19968, 40908, 40960, 42124, 42192, 42237, 42240, 42508, 42512, 42527, 42538, 42539, 42560, 42606, 42623, 42647, 42656, 42735, 42775, 42783, 42786, 42888, 42891, 42894, 42896, 42899, 42912, 42922, 43000, 43009, 43011, 43013, 43015, 43018, 43020, 43042, 43072, 43123, 43138, 43187, 43250, 43255, 43259, 43259, 43274, 43301, 43312, 43334, 43360, 43388, 43396, 43442, 43471, 43471, 43520, 43560, 43584, 43586, 43588, 43595, 43616, 43638, 43642, 43642, 43648, 43695, 43697, 43697, 43701, 43702, 43705, 43709, 43712, 43712, 43714, 43714, 43739, 43741, 43744, 43754, 43762, 43764, 43777, 43782, 43785, 43790, 43793, 43798, 43808, 43814, 43816, 43822, 43968, 44002, 44032, 55203, 55216, 55238, 55243, 55291, 63744, 64109, 64112, 64217, 64256, 64262, 64275, 64279, 64285, 64285, 64287, 64296, 64298, 64310, 64312, 64316, 64318, 64318, 64320, 64321, 64323, 64324, 64326, 64433, 64467, 64829, 64848, 64911, 64914, 64967, 65008, 65019, 65136, 65140, 65142, 65276, 65313, 65338, 65345, 65370, 65382, 65470, 65474, 65479, 65482, 65487, 65490, 65495, 65498, 65500, 1070, 0, 1, 1, 0, 0, 0, 0, 3, 1, 0, 0, 0, 0, 5, 1, 0, 0, 0, 0, 7, 1, 0, 0, 0, 0, 9, 1, 0, 0, 0, 0, 11, 1, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 15, 1, 0, 0, 0, 0, 17, 1, 0, 0, 0, 0, 19, 1, 0, 0, 0, 0, 21, 1, 0, 0, 0, 0, 23, 1, 0, 0, 0, 0, 25, 1, 0, 0, 0, 0, 27, 1, 0, 0, 0, 0, 29, 1, 0, 0, 0, 0, 31, 1, 0, 0, 0, 0, 33, 1, 0, 0, 0, 0, 35, 1, 0, 0, 0, 0, 37, 1, 0, 0, 0, 0, 39, 1, 0, 0, 0, 0, 41, 1, 0, 0, 0, 0, 43, 1, 0, 0, 0, 0, 45, 1, 0, 0, 0, 0, 47, 1, 0, 0, 0, 0, 49, 1, 0, 0, 0, 0, 51, 1, 0, 0, 0, 0, 53, 1, 0, 0, 0, 0, 55, 1, 0, 0, 0, 0, 57, 1, 0, 0, 0, 0, 59, 1, 0, 0, 0, 0, 61, 1, 0, 0, 
0, 0, 63, 1, 0, 0, 0, 0, 65, 1, 0, 0, 0, 0, 67, 1, 0, 0, 0, 0, 69, 1, 0, 0, 0, 0, 71, 1, 0, 0, 0, 0, 73, 1, 0, 0, 0, 0, 75, 1, 0, 0, 0, 0, 77, 1, 0, 0, 0, 0, 79, 1, 0, 0, 0, 0, 81, 1, 0, 0, 0, 0, 83, 1, 0, 0, 0, 0, 85, 1, 0, 0, 0, 0, 87, 1, 0, 0, 0, 0, 89, 1, 0, 0, 0, 0, 91, 1, 0, 0, 0, 0, 93, 1, 0, 0, 0, 0, 95, 1, 0, 0, 0, 0, 97, 1, 0, 0, 0, 0, 99, 1, 0, 0, 0, 0, 101, 1, 0, 0, 0, 0, 103, 1, 0, 0, 0, 0, 105, 1, 0, 0, 0, 0, 107, 1, 0, 0, 0, 0, 109, 1, 0, 0, 0, 0, 111, 1, 0, 0, 0, 0, 113, 1, 0, 0, 0, 0, 115, 1, 0, 0, 0, 0, 117, 1, 0, 0, 0, 0, 119, 1, 0, 0, 0, 0, 121, 1, 0, 0, 0, 0, 123, 1, 0, 0, 0, 0, 125, 1, 0, 0, 0, 0, 127, 1, 0, 0, 0, 0, 129, 1, 0, 0, 0, 0, 131, 1, 0, 0, 0, 0, 133, 1, 0, 0, 0, 0, 135, 1, 0, 0, 0, 0, 137, 1, 0, 0, 0, 0, 139, 1, 0, 0, 0, 0, 141, 1, 0, 0, 0, 0, 143, 1, 0, 0, 0, 0, 145, 1, 0, 0, 0, 0, 147, 1, 0, 0, 0, 0, 149, 1, 0, 0, 0, 0, 151, 1, 0, 0, 0, 0, 153, 1, 0, 0, 0, 0, 155, 1, 0, 0, 0, 0, 157, 1, 0, 0, 0, 0, 159, 1, 0, 0, 0, 0, 161, 1, 0, 0, 0, 0, 163, 1, 0, 0, 0, 0, 165, 1, 0, 0, 0, 0, 167, 1, 0, 0, 0, 0, 169, 1, 0, 0, 0, 0, 171, 1, 0, 0, 0, 0, 173, 1, 0, 0, 0, 0, 175, 1, 0, 0, 0, 0, 177, 1, 0, 0, 0, 0, 179, 1, 0, 0, 0, 0, 181, 1, 0, 0, 0, 0, 183, 1, 0, 0, 0, 0, 185, 1, 0, 0, 0, 0, 187, 1, 0, 0, 0, 0, 189, 1, 0, 0, 0, 0, 191, 1, 0, 0, 0, 0, 193, 1, 0, 0, 0, 0, 195, 1, 0, 0, 0, 0, 197, 1, 0, 0, 0, 0, 199, 1, 0, 0, 0, 0, 201, 1, 0, 0, 0, 0, 203, 1, 0, 0, 0, 0, 205, 1, 0, 0, 0, 0, 207, 1, 0, 0, 0, 0, 209, 1, 0, 0, 0, 0, 211, 1, 0, 0, 0, 0, 213, 1, 0, 0, 0, 0, 215, 1, 0, 0, 0, 0, 217, 1, 0, 0, 0, 0, 219, 1, 0, 0, 0, 0, 221, 1, 0, 0, 0, 0, 223, 1, 0, 0, 0, 0, 225, 1, 0, 0, 0, 0, 227, 1, 0, 0, 0, 0, 229, 1, 0, 0, 0, 0, 231, 1, 0, 0, 0, 0, 233, 1, 0, 0, 0, 0, 235, 1, 0, 0, 0, 0, 237, 1, 0, 0, 0, 0, 239, 1, 0, 0, 0, 0, 241, 1, 0, 0, 0, 0, 243, 1, 0, 0, 0, 0, 245, 1, 0, 0, 0, 0, 247, 1, 0, 0, 0, 0, 249, 1, 0, 0, 0, 0, 251, 1, 0, 0, 0, 0, 253, 1, 0, 0, 0, 0, 255, 1, 0, 0, 0, 0, 257, 1, 0, 0, 0, 0, 259, 1, 0, 0, 0, 0, 261, 1, 0, 0, 0, 0, 263, 1, 0, 0, 0, 0, 265, 1, 0, 0, 0, 1, 307, 1, 0, 0, 0, 3, 309, 1, 0, 0, 0, 5, 311, 1, 0, 0, 0, 7, 313, 1, 0, 0, 0, 9, 316, 1, 0, 0, 0, 11, 318, 1, 0, 0, 0, 13, 320, 1, 0, 0, 0, 15, 322, 1, 0, 0, 0, 17, 324, 1, 0, 0, 0, 19, 326, 1, 0, 0, 0, 21, 328, 1, 0, 0, 0, 23, 330, 1, 0, 0, 0, 25, 333, 1, 0, 0, 0, 27, 335, 1, 0, 0, 0, 29, 337, 1, 0, 0, 0, 31, 339, 1, 0, 0, 0, 33, 341, 1, 0, 0, 0, 35, 343, 1, 0, 0, 0, 37, 346, 1, 0, 0, 0, 39, 348, 1, 0, 0, 0, 41, 350, 1, 0, 0, 0, 43, 353, 1, 0, 0, 0, 45, 356, 1, 0, 0, 0, 47, 358, 1, 0, 0, 0, 49, 360, 1, 0, 0, 0, 51, 362, 1, 0, 0, 0, 53, 364, 1, 0, 0, 0, 55, 366, 1, 0, 0, 0, 57, 368, 1, 0, 0, 0, 59, 370, 1, 0, 0, 0, 61, 372, 1, 0, 0, 0, 63, 374, 1, 0, 0, 0, 65, 376, 1, 0, 0, 0, 67, 378, 1, 0, 0, 0, 69, 380, 1, 0, 0, 0, 71, 382, 1, 0, 0, 0, 73, 384, 1, 0, 0, 0, 75, 386, 1, 0, 0, 0, 77, 388, 1, 0, 0, 0, 79, 390, 1, 0, 0, 0, 81, 392, 1, 0, 0, 0, 83, 394, 1, 0, 0, 0, 85, 396, 1, 0, 0, 0, 87, 398, 1, 0, 0, 0, 89, 400, 1, 0, 0, 0, 91, 402, 1, 0, 0, 0, 93, 410, 1, 0, 0, 0, 95, 418, 1, 0, 0, 0, 97, 424, 1, 0, 0, 0, 99, 428, 1, 0, 0, 0, 101, 437, 1, 0, 0, 0, 103, 443, 1, 0, 0, 0, 105, 450, 1, 0, 0, 0, 107, 453, 1, 0, 0, 0, 109, 459, 1, 0, 0, 0, 111, 462, 1, 0, 0, 0, 113, 469, 1, 0, 0, 0, 115, 473, 1, 0, 0, 0, 117, 480, 1, 0, 0, 0, 119, 487, 1, 0, 0, 0, 121, 494, 1, 0, 0, 0, 123, 499, 1, 0, 0, 0, 125, 505, 1, 0, 0, 0, 127, 510, 1, 0, 0, 0, 129, 519, 1, 0, 0, 0, 131, 526, 1, 0, 0, 0, 133, 532, 1, 0, 0, 0, 135, 535, 1, 0, 0, 0, 137, 540, 1, 0, 0, 0, 139, 546, 1, 0, 0, 0, 141, 556, 1, 0, 0, 0, 143, 560, 1, 0, 0, 0, 
145, 571, 1, 0, 0, 0, 147, 576, 1, 0, 0, 0, 149, 582, 1, 0, 0, 0, 151, 587, 1, 0, 0, 0, 153, 593, 1, 0, 0, 0, 155, 599, 1, 0, 0, 0, 157, 602, 1, 0, 0, 0, 159, 606, 1, 0, 0, 0, 161, 610, 1, 0, 0, 0, 163, 614, 1, 0, 0, 0, 165, 617, 1, 0, 0, 0, 167, 624, 1, 0, 0, 0, 169, 629, 1, 0, 0, 0, 171, 638, 1, 0, 0, 0, 173, 645, 1, 0, 0, 0, 175, 648, 1, 0, 0, 0, 177, 653, 1, 0, 0, 0, 179, 659, 1, 0, 0, 0, 181, 663, 1, 0, 0, 0, 183, 668, 1, 0, 0, 0, 185, 675, 1, 0, 0, 0, 187, 680, 1, 0, 0, 0, 189, 686, 1, 0, 0, 0, 191, 693, 1, 0, 0, 0, 193, 698, 1, 0, 0, 0, 195, 703, 1, 0, 0, 0, 197, 707, 1, 0, 0, 0, 199, 712, 1, 0, 0, 0, 201, 735, 1, 0, 0, 0, 203, 737, 1, 0, 0, 0, 205, 757, 1, 0, 0, 0, 207, 773, 1, 0, 0, 0, 209, 775, 1, 0, 0, 0, 211, 782, 1, 0, 0, 0, 213, 786, 1, 0, 0, 0, 215, 790, 1, 0, 0, 0, 217, 794, 1, 0, 0, 0, 219, 796, 1, 0, 0, 0, 221, 800, 1, 0, 0, 0, 223, 802, 1, 0, 0, 0, 225, 826, 1, 0, 0, 0, 227, 840, 1, 0, 0, 0, 229, 849, 1, 0, 0, 0, 231, 856, 1, 0, 0, 0, 233, 864, 1, 0, 0, 0, 235, 871, 1, 0, 0, 0, 237, 882, 1, 0, 0, 0, 239, 885, 1, 0, 0, 0, 241, 889, 1, 0, 0, 0, 243, 897, 1, 0, 0, 0, 245, 904, 1, 0, 0, 0, 247, 914, 1, 0, 0, 0, 249, 921, 1, 0, 0, 0, 251, 924, 1, 0, 0, 0, 253, 928, 1, 0, 0, 0, 255, 935, 1, 0, 0, 0, 257, 939, 1, 0, 0, 0, 259, 949, 1, 0, 0, 0, 261, 954, 1, 0, 0, 0, 263, 970, 1, 0, 0, 0, 265, 1001, 1, 0, 0, 0, 267, 1003, 1, 0, 0, 0, 269, 1005, 1, 0, 0, 0, 271, 1007, 1, 0, 0, 0, 273, 1009, 1, 0, 0, 0, 275, 1011, 1, 0, 0, 0, 277, 1013, 1, 0, 0, 0, 279, 1015, 1, 0, 0, 0, 281, 1017, 1, 0, 0, 0, 283, 1019, 1, 0, 0, 0, 285, 1021, 1, 0, 0, 0, 287, 1023, 1, 0, 0, 0, 289, 1025, 1, 0, 0, 0, 291, 1027, 1, 0, 0, 0, 293, 1029, 1, 0, 0, 0, 295, 1031, 1, 0, 0, 0, 297, 1033, 1, 0, 0, 0, 299, 1035, 1, 0, 0, 0, 301, 1037, 1, 0, 0, 0, 303, 1039, 1, 0, 0, 0, 305, 1041, 1, 0, 0, 0, 307, 308, 5, 59, 0, 0, 308, 2, 1, 0, 0, 0, 309, 310, 5, 44, 0, 0, 310, 4, 1, 0, 0, 0, 311, 312, 5, 61, 0, 0, 312, 6, 1, 0, 0, 0, 313, 314, 5, 43, 0, 0, 314, 315, 5, 61, 0, 0, 315, 8, 1, 0, 0, 0, 316, 317, 5, 42, 0, 0, 317, 10, 1, 0, 0, 0, 318, 319, 5, 40, 0, 0, 319, 12, 1, 0, 0, 0, 320, 321, 5, 41, 0, 0, 321, 14, 1, 0, 0, 0, 322, 323, 5, 91, 0, 0, 323, 16, 1, 0, 0, 0, 324, 325, 5, 93, 0, 0, 325, 18, 1, 0, 0, 0, 326, 327, 5, 58, 0, 0, 327, 20, 1, 0, 0, 0, 328, 329, 5, 124, 0, 0, 329, 22, 1, 0, 0, 0, 330, 331, 5, 46, 0, 0, 331, 332, 5, 46, 0, 0, 332, 24, 1, 0, 0, 0, 333, 334, 5, 43, 0, 0, 334, 26, 1, 0, 0, 0, 335, 336, 5, 45, 0, 0, 336, 28, 1, 0, 0, 0, 337, 338, 5, 47, 0, 0, 338, 30, 1, 0, 0, 0, 339, 340, 5, 37, 0, 0, 340, 32, 1, 0, 0, 0, 341, 342, 5, 94, 0, 0, 342, 34, 1, 0, 0, 0, 343, 344, 5, 60, 0, 0, 344, 345, 5, 62, 0, 0, 345, 36, 1, 0, 0, 0, 346, 347, 5, 60, 0, 0, 347, 38, 1, 0, 0, 0, 348, 349, 5, 62, 0, 0, 349, 40, 1, 0, 0, 0, 350, 351, 5, 60, 0, 0, 351, 352, 5, 61, 0, 0, 352, 42, 1, 0, 0, 0, 353, 354, 5, 62, 0, 0, 354, 355, 5, 61, 0, 0, 355, 44, 1, 0, 0, 0, 356, 357, 5, 46, 0, 0, 357, 46, 1, 0, 0, 0, 358, 359, 5, 123, 0, 0, 359, 48, 1, 0, 0, 0, 360, 361, 5, 125, 0, 0, 361, 50, 1, 0, 0, 0, 362, 363, 5, 36, 0, 0, 363, 52, 1, 0, 0, 0, 364, 365, 5, 10216, 0, 0, 365, 54, 1, 0, 0, 0, 366, 367, 5, 12296, 0, 0, 367, 56, 1, 0, 0, 0, 368, 369, 5, 65124, 0, 0, 369, 58, 1, 0, 0, 0, 370, 371, 5, 65308, 0, 0, 371, 60, 1, 0, 0, 0, 372, 373, 5, 10217, 0, 0, 373, 62, 1, 0, 0, 0, 374, 375, 5, 12297, 0, 0, 375, 64, 1, 0, 0, 0, 376, 377, 5, 65125, 0, 0, 377, 66, 1, 0, 0, 0, 378, 379, 5, 65310, 0, 0, 379, 68, 1, 0, 0, 0, 380, 381, 5, 173, 0, 0, 381, 70, 1, 0, 0, 0, 382, 383, 5, 8208, 0, 0, 383, 72, 1, 0, 0, 0, 384, 385, 5, 8209, 0, 0, 
385, 74, 1, 0, 0, 0, 386, 387, 5, 8210, 0, 0, 387, 76, 1, 0, 0, 0, 388, 389, 5, 8211, 0, 0, 389, 78, 1, 0, 0, 0, 390, 391, 5, 8212, 0, 0, 391, 80, 1, 0, 0, 0, 392, 393, 5, 8213, 0, 0, 393, 82, 1, 0, 0, 0, 394, 395, 5, 8722, 0, 0, 395, 84, 1, 0, 0, 0, 396, 397, 5, 65112, 0, 0, 397, 86, 1, 0, 0, 0, 398, 399, 5, 65123, 0, 0, 399, 88, 1, 0, 0, 0, 400, 401, 5, 65293, 0, 0, 401, 90, 1, 0, 0, 0, 402, 403, 7, 0, 0, 0, 403, 404, 7, 1, 0, 0, 404, 405, 7, 2, 0, 0, 405, 406, 7, 3, 0, 0, 406, 407, 7, 4, 0, 0, 407, 408, 7, 5, 0, 0, 408, 409, 7, 6, 0, 0, 409, 92, 1, 0, 0, 0, 410, 411, 7, 2, 0, 0, 411, 412, 7, 7, 0, 0, 412, 413, 7, 8, 0, 0, 413, 414, 7, 9, 0, 0, 414, 415, 7, 5, 0, 0, 415, 416, 7, 3, 0, 0, 416, 417, 7, 0, 0, 0, 417, 94, 1, 0, 0, 0, 418, 419, 7, 10, 0, 0, 419, 420, 7, 6, 0, 0, 420, 421, 7, 5, 0, 0, 421, 422, 7, 8, 0, 0, 422, 423, 7, 6, 0, 0, 423, 96, 1, 0, 0, 0, 424, 425, 7, 4, 0, 0, 425, 426, 7, 3, 0, 0, 426, 427, 7, 3, 0, 0, 427, 98, 1, 0, 0, 0, 428, 429, 7, 8, 0, 0, 429, 430, 7, 2, 0, 0, 430, 431, 7, 11, 0, 0, 431, 432, 7, 5, 0, 0, 432, 433, 7, 8, 0, 0, 433, 434, 7, 6, 0, 0, 434, 435, 7, 4, 0, 0, 435, 436, 7, 3, 0, 0, 436, 100, 1, 0, 0, 0, 437, 438, 7, 12, 0, 0, 438, 439, 7, 4, 0, 0, 439, 440, 7, 11, 0, 0, 440, 441, 7, 13, 0, 0, 441, 442, 7, 14, 0, 0, 442, 102, 1, 0, 0, 0, 443, 444, 7, 10, 0, 0, 444, 445, 7, 6, 0, 0, 445, 446, 7, 15, 0, 0, 446, 447, 7, 5, 0, 0, 447, 448, 7, 6, 0, 0, 448, 449, 7, 16, 0, 0, 449, 104, 1, 0, 0, 0, 450, 451, 7, 4, 0, 0, 451, 452, 7, 17, 0, 0, 452, 106, 1, 0, 0, 0, 453, 454, 7, 12, 0, 0, 454, 455, 7, 0, 0, 0, 455, 456, 7, 7, 0, 0, 456, 457, 7, 18, 0, 0, 457, 458, 7, 0, 0, 0, 458, 108, 1, 0, 0, 0, 459, 460, 7, 8, 0, 0, 460, 461, 7, 6, 0, 0, 461, 110, 1, 0, 0, 0, 462, 463, 7, 13, 0, 0, 463, 464, 7, 7, 0, 0, 464, 465, 7, 0, 0, 0, 465, 466, 7, 4, 0, 0, 466, 467, 7, 11, 0, 0, 467, 468, 7, 0, 0, 0, 468, 112, 1, 0, 0, 0, 469, 470, 7, 17, 0, 0, 470, 471, 7, 0, 0, 0, 471, 472, 7, 11, 0, 0, 472, 114, 1, 0, 0, 0, 473, 474, 7, 16, 0, 0, 474, 475, 7, 0, 0, 0, 475, 476, 7, 11, 0, 0, 476, 477, 7, 4, 0, 0, 477, 478, 7, 13, 0, 0, 478, 479, 7, 14, 0, 0, 479, 116, 1, 0, 0, 0, 480, 481, 7, 16, 0, 0, 481, 482, 7, 0, 0, 0, 482, 483, 7, 3, 0, 0, 483, 484, 7, 0, 0, 0, 484, 485, 7, 11, 0, 0, 485, 486, 7, 0, 0, 0, 486, 118, 1, 0, 0, 0, 487, 488, 7, 7, 0, 0, 488, 489, 7, 0, 0, 0, 489, 490, 7, 12, 0, 0, 490, 491, 7, 8, 0, 0, 491, 492, 7, 19, 0, 0, 492, 493, 7, 0, 0, 0, 493, 120, 1, 0, 0, 0, 494, 495, 7, 13, 0, 0, 495, 496, 7, 4, 0, 0, 496, 497, 7, 3, 0, 0, 497, 498, 7, 3, 0, 0, 498, 122, 1, 0, 0, 0, 499, 500, 7, 20, 0, 0, 500, 501, 7, 5, 0, 0, 501, 502, 7, 0, 0, 0, 502, 503, 7, 3, 0, 0, 503, 504, 7, 16, 0, 0, 504, 124, 1, 0, 0, 0, 505, 506, 7, 15, 0, 0, 506, 507, 7, 5, 0, 0, 507, 508, 7, 11, 0, 0, 508, 509, 7, 14, 0, 0, 509, 126, 1, 0, 0, 0, 510, 511, 7, 16, 0, 0, 511, 512, 7, 5, 0, 0, 512, 513, 7, 17, 0, 0, 513, 514, 7, 11, 0, 0, 514, 515, 7, 5, 0, 0, 515, 516, 7, 6, 0, 0, 516, 517, 7, 13, 0, 0, 517, 518, 7, 11, 0, 0, 518, 128, 1, 0, 0, 0, 519, 520, 7, 7, 0, 0, 520, 521, 7, 0, 0, 0, 521, 522, 7, 11, 0, 0, 522, 523, 7, 10, 0, 0, 523, 524, 7, 7, 0, 0, 524, 525, 7, 6, 0, 0, 525, 130, 1, 0, 0, 0, 526, 527, 7, 8, 0, 0, 527, 528, 7, 7, 0, 0, 528, 529, 7, 16, 0, 0, 529, 530, 7, 0, 0, 0, 530, 531, 7, 7, 0, 0, 531, 132, 1, 0, 0, 0, 532, 533, 7, 21, 0, 0, 533, 534, 7, 20, 0, 0, 534, 134, 1, 0, 0, 0, 535, 536, 7, 17, 0, 0, 536, 537, 7, 22, 0, 0, 537, 538, 7, 5, 0, 0, 538, 539, 7, 2, 0, 0, 539, 136, 1, 0, 0, 0, 540, 541, 7, 3, 0, 0, 541, 542, 7, 5, 0, 0, 542, 543, 7, 12, 0, 0, 543, 544, 7, 5, 0, 0, 
544, 545, 7, 11, 0, 0, 545, 138, 1, 0, 0, 0, 546, 547, 7, 4, 0, 0, 547, 548, 7, 17, 0, 0, 548, 549, 7, 13, 0, 0, 549, 550, 7, 0, 0, 0, 550, 551, 7, 6, 0, 0, 551, 552, 7, 16, 0, 0, 552, 553, 7, 5, 0, 0, 553, 554, 7, 6, 0, 0, 554, 555, 7, 18, 0, 0, 555, 140, 1, 0, 0, 0, 556, 557, 7, 4, 0, 0, 557, 558, 7, 17, 0, 0, 558, 559, 7, 13, 0, 0, 559, 142, 1, 0, 0, 0, 560, 561, 7, 16, 0, 0, 561, 562, 7, 0, 0, 0, 562, 563, 7, 17, 0, 0, 563, 564, 7, 13, 0, 0, 564, 565, 7, 0, 0, 0, 565, 566, 7, 6, 0, 0, 566, 567, 7, 16, 0, 0, 567, 568, 7, 5, 0, 0, 568, 569, 7, 6, 0, 0, 569, 570, 7, 18, 0, 0, 570, 144, 1, 0, 0, 0, 571, 572, 7, 16, 0, 0, 572, 573, 7, 0, 0, 0, 573, 574, 7, 17, 0, 0, 574, 575, 7, 13, 0, 0, 575, 146, 1, 0, 0, 0, 576, 577, 7, 10, 0, 0, 577, 578, 7, 17, 0, 0, 578, 579, 7, 5, 0, 0, 579, 580, 7, 6, 0, 0, 580, 581, 7, 18, 0, 0, 581, 148, 1, 0, 0, 0, 582, 583, 7, 23, 0, 0, 583, 584, 7, 8, 0, 0, 584, 585, 7, 5, 0, 0, 585, 586, 7, 6, 0, 0, 586, 150, 1, 0, 0, 0, 587, 588, 7, 17, 0, 0, 588, 589, 7, 11, 0, 0, 589, 590, 7, 4, 0, 0, 590, 591, 7, 7, 0, 0, 591, 592, 7, 11, 0, 0, 592, 152, 1, 0, 0, 0, 593, 594, 7, 15, 0, 0, 594, 595, 7, 14, 0, 0, 595, 596, 7, 0, 0, 0, 596, 597, 7, 7, 0, 0, 597, 598, 7, 0, 0, 0, 598, 154, 1, 0, 0, 0, 599, 600, 7, 8, 0, 0, 600, 601, 7, 7, 0, 0, 601, 156, 1, 0, 0, 0, 602, 603, 7, 1, 0, 0, 603, 604, 7, 8, 0, 0, 604, 605, 7, 7, 0, 0, 605, 158, 1, 0, 0, 0, 606, 607, 7, 4, 0, 0, 607, 608, 7, 6, 0, 0, 608, 609, 7, 16, 0, 0, 609, 160, 1, 0, 0, 0, 610, 611, 7, 6, 0, 0, 611, 612, 7, 8, 0, 0, 612, 613, 7, 11, 0, 0, 613, 162, 1, 0, 0, 0, 614, 615, 7, 5, 0, 0, 615, 616, 7, 6, 0, 0, 616, 164, 1, 0, 0, 0, 617, 618, 7, 17, 0, 0, 618, 619, 7, 11, 0, 0, 619, 620, 7, 4, 0, 0, 620, 621, 7, 7, 0, 0, 621, 622, 7, 11, 0, 0, 622, 623, 7, 17, 0, 0, 623, 166, 1, 0, 0, 0, 624, 625, 7, 0, 0, 0, 625, 626, 7, 6, 0, 0, 626, 627, 7, 16, 0, 0, 627, 628, 7, 17, 0, 0, 628, 168, 1, 0, 0, 0, 629, 630, 7, 13, 0, 0, 630, 631, 7, 8, 0, 0, 631, 632, 7, 6, 0, 0, 632, 633, 7, 11, 0, 0, 633, 634, 7, 4, 0, 0, 634, 635, 7, 5, 0, 0, 635, 636, 7, 6, 0, 0, 636, 637, 7, 17, 0, 0, 637, 170, 1, 0, 0, 0, 638, 639, 7, 7, 0, 0, 639, 640, 7, 0, 0, 0, 640, 641, 7, 18, 0, 0, 641, 642, 7, 0, 0, 0, 642, 643, 7, 1, 0, 0, 643, 644, 7, 2, 0, 0, 644, 172, 1, 0, 0, 0, 645, 646, 7, 5, 0, 0, 646, 647, 7, 17, 0, 0, 647, 174, 1, 0, 0, 0, 648, 649, 7, 6, 0, 0, 649, 650, 7, 10, 0, 0, 650, 651, 7, 3, 0, 0, 651, 652, 7, 3, 0, 0, 652, 176, 1, 0, 0, 0, 653, 654, 7, 13, 0, 0, 654, 655, 7, 8, 0, 0, 655, 656, 7, 10, 0, 0, 656, 657, 7, 6, 0, 0, 657, 658, 7, 11, 0, 0, 658, 178, 1, 0, 0, 0, 659, 660, 7, 4, 0, 0, 660, 661, 7, 6, 0, 0, 661, 662, 7, 20, 0, 0, 662, 180, 1, 0, 0, 0, 663, 664, 7, 6, 0, 0, 664, 665, 7, 8, 0, 0, 665, 666, 7, 6, 0, 0, 666, 667, 7, 0, 0, 0, 667, 182, 1, 0, 0, 0, 668, 669, 7, 17, 0, 0, 669, 670, 7, 5, 0, 0, 670, 671, 7, 6, 0, 0, 671, 672, 7, 18, 0, 0, 672, 673, 7, 3, 0, 0, 673, 674, 7, 0, 0, 0, 674, 184, 1, 0, 0, 0, 675, 676, 7, 11, 0, 0, 676, 677, 7, 7, 0, 0, 677, 678, 7, 10, 0, 0, 678, 679, 7, 0, 0, 0, 679, 186, 1, 0, 0, 0, 680, 681, 7, 9, 0, 0, 681, 682, 7, 4, 0, 0, 682, 683, 7, 3, 0, 0, 683, 684, 7, 17, 0, 0, 684, 685, 7, 0, 0, 0, 685, 188, 1, 0, 0, 0, 686, 687, 7, 0, 0, 0, 687, 688, 7, 1, 0, 0, 688, 689, 7, 5, 0, 0, 689, 690, 7, 17, 0, 0, 690, 691, 7, 11, 0, 0, 691, 692, 7, 17, 0, 0, 692, 190, 1, 0, 0, 0, 693, 694, 7, 13, 0, 0, 694, 695, 7, 4, 0, 0, 695, 696, 7, 17, 0, 0, 696, 697, 7, 0, 0, 0, 697, 192, 1, 0, 0, 0, 698, 699, 7, 0, 0, 0, 699, 700, 7, 3, 0, 0, 700, 701, 7, 17, 0, 0, 701, 702, 7, 0, 0, 0, 702, 194, 1, 0, 0, 0, 
703, 704, 7, 0, 0, 0, 704, 705, 7, 6, 0, 0, 705, 706, 7, 16, 0, 0, 706, 196, 1, 0, 0, 0, 707, 708, 7, 15, 0, 0, 708, 709, 7, 14, 0, 0, 709, 710, 7, 0, 0, 0, 710, 711, 7, 6, 0, 0, 711, 198, 1, 0, 0, 0, 712, 713, 7, 11, 0, 0, 713, 714, 7, 14, 0, 0, 714, 715, 7, 0, 0, 0, 715, 716, 7, 6, 0, 0, 716, 200, 1, 0, 0, 0, 717, 722, 5, 34, 0, 0, 718, 721, 3, 297, 148, 0, 719, 721, 3, 203, 101, 0, 720, 718, 1, 0, 0, 0, 720, 719, 1, 0, 0, 0, 721, 724, 1, 0, 0, 0, 722, 720, 1, 0, 0, 0, 722, 723, 1, 0, 0, 0, 723, 725, 1, 0, 0, 0, 724, 722, 1, 0, 0, 0, 725, 736, 5, 34, 0, 0, 726, 731, 5, 39, 0, 0, 727, 730, 3, 277, 138, 0, 728, 730, 3, 203, 101, 0, 729, 727, 1, 0, 0, 0, 729, 728, 1, 0, 0, 0, 730, 733, 1, 0, 0, 0, 731, 729, 1, 0, 0, 0, 731, 732, 1, 0, 0, 0, 732, 734, 1, 0, 0, 0, 733, 731, 1, 0, 0, 0, 734, 736, 5, 39, 0, 0, 735, 717, 1, 0, 0, 0, 735, 726, 1, 0, 0, 0, 736, 202, 1, 0, 0, 0, 737, 755, 5, 92, 0, 0, 738, 756, 7, 24, 0, 0, 739, 740, 7, 10, 0, 0, 740, 741, 3, 213, 106, 0, 741, 742, 3, 213, 106, 0, 742, 743, 3, 213, 106, 0, 743, 744, 3, 213, 106, 0, 744, 756, 1, 0, 0, 0, 745, 746, 7, 10, 0, 0, 746, 747, 3, 213, 106, 0, 747, 748, 3, 213, 106, 0, 748, 749, 3, 213, 106, 0, 749, 750, 3, 213, 106, 0, 750, 751, 3, 213, 106, 0, 751, 752, 3, 213, 106, 0, 752, 753, 3, 213, 106, 0, 753, 754, 3, 213, 106, 0, 754, 756, 1, 0, 0, 0, 755, 738, 1, 0, 0, 0, 755, 739, 1, 0, 0, 0, 755, 745, 1, 0, 0, 0, 756, 204, 1, 0, 0, 0, 757, 758, 5, 48, 0, 0, 758, 759, 5, 120, 0, 0, 759, 761, 1, 0, 0, 0, 760, 762, 3, 213, 106, 0, 761, 760, 1, 0, 0, 0, 762, 763, 1, 0, 0, 0, 763, 761, 1, 0, 0, 0, 763, 764, 1, 0, 0, 0, 764, 206, 1, 0, 0, 0, 765, 774, 3, 223, 111, 0, 766, 770, 3, 217, 108, 0, 767, 769, 3, 215, 107, 0, 768, 767, 1, 0, 0, 0, 769, 772, 1, 0, 0, 0, 770, 768, 1, 0, 0, 0, 770, 771, 1, 0, 0, 0, 771, 774, 1, 0, 0, 0, 772, 770, 1, 0, 0, 0, 773, 765, 1, 0, 0, 0, 773, 766, 1, 0, 0, 0, 774, 208, 1, 0, 0, 0, 775, 777, 3, 223, 111, 0, 776, 778, 3, 221, 110, 0, 777, 776, 1, 0, 0, 0, 778, 779, 1, 0, 0, 0, 779, 777, 1, 0, 0, 0, 779, 780, 1, 0, 0, 0, 780, 210, 1, 0, 0, 0, 781, 783, 7, 25, 0, 0, 782, 781, 1, 0, 0, 0, 783, 212, 1, 0, 0, 0, 784, 787, 3, 215, 107, 0, 785, 787, 3, 211, 105, 0, 786, 784, 1, 0, 0, 0, 786, 785, 1, 0, 0, 0, 787, 214, 1, 0, 0, 0, 788, 791, 3, 223, 111, 0, 789, 791, 3, 217, 108, 0, 790, 788, 1, 0, 0, 0, 790, 789, 1, 0, 0, 0, 791, 216, 1, 0, 0, 0, 792, 795, 3, 219, 109, 0, 793, 795, 2, 56, 57, 0, 794, 792, 1, 0, 0, 0, 794, 793, 1, 0, 0, 0, 795, 218, 1, 0, 0, 0, 796, 797, 2, 49, 55, 0, 797, 220, 1, 0, 0, 0, 798, 801, 3, 223, 111, 0, 799, 801, 3, 219, 109, 0, 800, 798, 1, 0, 0, 0, 800, 799, 1, 0, 0, 0, 801, 222, 1, 0, 0, 0, 802, 803, 5, 48, 0, 0, 803, 224, 1, 0, 0, 0, 804, 806, 3, 215, 107, 0, 805, 804, 1, 0, 0, 0, 806, 807, 1, 0, 0, 0, 807, 805, 1, 0, 0, 0, 807, 808, 1, 0, 0, 0, 808, 827, 1, 0, 0, 0, 809, 811, 3, 215, 107, 0, 810, 809, 1, 0, 0, 0, 811, 812, 1, 0, 0, 0, 812, 810, 1, 0, 0, 0, 812, 813, 1, 0, 0, 0, 813, 814, 1, 0, 0, 0, 814, 816, 5, 46, 0, 0, 815, 817, 3, 215, 107, 0, 816, 815, 1, 0, 0, 0, 817, 818, 1, 0, 0, 0, 818, 816, 1, 0, 0, 0, 818, 819, 1, 0, 0, 0, 819, 827, 1, 0, 0, 0, 820, 822, 5, 46, 0, 0, 821, 823, 3, 215, 107, 0, 822, 821, 1, 0, 0, 0, 823, 824, 1, 0, 0, 0, 824, 822, 1, 0, 0, 0, 824, 825, 1, 0, 0, 0, 825, 827, 1, 0, 0, 0, 826, 805, 1, 0, 0, 0, 826, 810, 1, 0, 0, 0, 826, 820, 1, 0, 0, 0, 827, 828, 1, 0, 0, 0, 828, 830, 7, 0, 0, 0, 829, 831, 5, 45, 0, 0, 830, 829, 1, 0, 0, 0, 830, 831, 1, 0, 0, 0, 831, 833, 1, 0, 0, 0, 832, 834, 3, 215, 107, 0, 833, 832, 1, 0, 0, 0, 834, 835, 1, 0, 0, 0, 835, 
833, 1, 0, 0, 0, 835, 836, 1, 0, 0, 0, 836, 226, 1, 0, 0, 0, 837, 839, 3, 215, 107, 0, 838, 837, 1, 0, 0, 0, 839, 842, 1, 0, 0, 0, 840, 838, 1, 0, 0, 0, 840, 841, 1, 0, 0, 0, 841, 843, 1, 0, 0, 0, 842, 840, 1, 0, 0, 0, 843, 845, 5, 46, 0, 0, 844, 846, 3, 215, 107, 0, 845, 844, 1, 0, 0, 0, 846, 847, 1, 0, 0, 0, 847, 845, 1, 0, 0, 0, 847, 848, 1, 0, 0, 0, 848, 228, 1, 0, 0, 0, 849, 850, 7, 9, 0, 0, 850, 851, 7, 5, 0, 0, 851, 852, 7, 3, 0, 0, 852, 853, 7, 11, 0, 0, 853, 854, 7, 0, 0, 0, 854, 855, 7, 7, 0, 0, 855, 230, 1, 0, 0, 0, 856, 857, 7, 0, 0, 0, 857, 858, 7, 1, 0, 0, 858, 859, 7, 11, 0, 0, 859, 860, 7, 7, 0, 0, 860, 861, 7, 4, 0, 0, 861, 862, 7, 13, 0, 0, 862, 863, 7, 11, 0, 0, 863, 232, 1, 0, 0, 0, 864, 868, 3, 255, 127, 0, 865, 867, 3, 257, 128, 0, 866, 865, 1, 0, 0, 0, 867, 870, 1, 0, 0, 0, 868, 866, 1, 0, 0, 0, 868, 869, 1, 0, 0, 0, 869, 234, 1, 0, 0, 0, 870, 868, 1, 0, 0, 0, 871, 872, 7, 13, 0, 0, 872, 873, 7, 8, 0, 0, 873, 874, 7, 6, 0, 0, 874, 875, 7, 17, 0, 0, 875, 876, 7, 11, 0, 0, 876, 877, 7, 7, 0, 0, 877, 878, 7, 4, 0, 0, 878, 879, 7, 5, 0, 0, 879, 880, 7, 6, 0, 0, 880, 881, 7, 11, 0, 0, 881, 236, 1, 0, 0, 0, 882, 883, 7, 16, 0, 0, 883, 884, 7, 8, 0, 0, 884, 238, 1, 0, 0, 0, 885, 886, 7, 9, 0, 0, 886, 887, 7, 8, 0, 0, 887, 888, 7, 7, 0, 0, 888, 240, 1, 0, 0, 0, 889, 890, 7, 7, 0, 0, 890, 891, 7, 0, 0, 0, 891, 892, 7, 26, 0, 0, 892, 893, 7, 10, 0, 0, 893, 894, 7, 5, 0, 0, 894, 895, 7, 7, 0, 0, 895, 896, 7, 0, 0, 0, 896, 242, 1, 0, 0, 0, 897, 898, 7, 10, 0, 0, 898, 899, 7, 6, 0, 0, 899, 900, 7, 5, 0, 0, 900, 901, 7, 26, 0, 0, 901, 902, 7, 10, 0, 0, 902, 903, 7, 0, 0, 0, 903, 244, 1, 0, 0, 0, 904, 905, 7, 12, 0, 0, 905, 906, 7, 4, 0, 0, 906, 907, 7, 6, 0, 0, 907, 908, 7, 16, 0, 0, 908, 909, 7, 4, 0, 0, 909, 910, 7, 11, 0, 0, 910, 911, 7, 8, 0, 0, 911, 912, 7, 7, 0, 0, 912, 913, 7, 20, 0, 0, 913, 246, 1, 0, 0, 0, 914, 915, 7, 17, 0, 0, 915, 916, 7, 13, 0, 0, 916, 917, 7, 4, 0, 0, 917, 918, 7, 3, 0, 0, 918, 919, 7, 4, 0, 0, 919, 920, 7, 7, 0, 0, 920, 248, 1, 0, 0, 0, 921, 922, 7, 8, 0, 0, 922, 923, 7, 9, 0, 0, 923, 250, 1, 0, 0, 0, 924, 925, 7, 4, 0, 0, 925, 926, 7, 16, 0, 0, 926, 927, 7, 16, 0, 0, 927, 252, 1, 0, 0, 0, 928, 929, 7, 16, 0, 0, 929, 930, 7, 7, 0, 0, 930, 931, 7, 8, 0, 0, 931, 932, 7, 2, 0, 0, 932, 254, 1, 0, 0, 0, 933, 936, 3, 305, 152, 0, 934, 936, 3, 293, 146, 0, 935, 933, 1, 0, 0, 0, 935, 934, 1, 0, 0, 0, 936, 256, 1, 0, 0, 0, 937, 940, 3, 273, 136, 0, 938, 940, 3, 289, 144, 0, 939, 937, 1, 0, 0, 0, 939, 938, 1, 0, 0, 0, 940, 258, 1, 0, 0, 0, 941, 945, 5, 96, 0, 0, 942, 944, 3, 269, 134, 0, 943, 942, 1, 0, 0, 0, 944, 947, 1, 0, 0, 0, 945, 943, 1, 0, 0, 0, 945, 946, 1, 0, 0, 0, 946, 948, 1, 0, 0, 0, 947, 945, 1, 0, 0, 0, 948, 950, 5, 96, 0, 0, 949, 941, 1, 0, 0, 0, 950, 951, 1, 0, 0, 0, 951, 949, 1, 0, 0, 0, 951, 952, 1, 0, 0, 0, 952, 260, 1, 0, 0, 0, 953, 955, 3, 263, 131, 0, 954, 953, 1, 0, 0, 0, 955, 956, 1, 0, 0, 0, 956, 954, 1, 0, 0, 0, 956, 957, 1, 0, 0, 0, 957, 262, 1, 0, 0, 0, 958, 971, 3, 291, 145, 0, 959, 971, 3, 295, 147, 0, 960, 971, 3, 299, 149, 0, 961, 971, 3, 301, 150, 0, 962, 971, 3, 267, 133, 0, 963, 971, 3, 287, 143, 0, 964, 971, 3, 285, 142, 0, 965, 971, 3, 283, 141, 0, 966, 971, 3, 271, 135, 0, 967, 971, 3, 303, 151, 0, 968, 971, 7, 27, 0, 0, 969, 971, 3, 265, 132, 0, 970, 958, 1, 0, 0, 0, 970, 959, 1, 0, 0, 0, 970, 960, 1, 0, 0, 0, 970, 961, 1, 0, 0, 0, 970, 962, 1, 0, 0, 0, 970, 963, 1, 0, 0, 0, 970, 964, 1, 0, 0, 0, 970, 965, 1, 0, 0, 0, 970, 966, 1, 0, 0, 0, 970, 967, 1, 0, 0, 0, 970, 968, 1, 0, 0, 0, 970, 969, 1, 0, 0, 0, 971, 264, 1, 0, 
0, 0, 972, 973, 5, 47, 0, 0, 973, 974, 5, 42, 0, 0, 974, 980, 1, 0, 0, 0, 975, 979, 3, 275, 137, 0, 976, 977, 5, 42, 0, 0, 977, 979, 3, 281, 140, 0, 978, 975, 1, 0, 0, 0, 978, 976, 1, 0, 0, 0, 979, 982, 1, 0, 0, 0, 980, 978, 1, 0, 0, 0, 980, 981, 1, 0, 0, 0, 981, 983, 1, 0, 0, 0, 982, 980, 1, 0, 0, 0, 983, 984, 5, 42, 0, 0, 984, 1002, 5, 47, 0, 0, 985, 986, 5, 47, 0, 0, 986, 987, 5, 47, 0, 0, 987, 991, 1, 0, 0, 0, 988, 990, 3, 279, 139, 0, 989, 988, 1, 0, 0, 0, 990, 993, 1, 0, 0, 0, 991, 989, 1, 0, 0, 0, 991, 992, 1, 0, 0, 0, 992, 995, 1, 0, 0, 0, 993, 991, 1, 0, 0, 0, 994, 996, 3, 287, 143, 0, 995, 994, 1, 0, 0, 0, 995, 996, 1, 0, 0, 0, 996, 999, 1, 0, 0, 0, 997, 1000, 3, 299, 149, 0, 998, 1000, 5, 0, 0, 1, 999, 997, 1, 0, 0, 0, 999, 998, 1, 0, 0, 0, 1000, 1002, 1, 0, 0, 0, 1001, 972, 1, 0, 0, 0, 1001, 985, 1, 0, 0, 0, 1002, 266, 1, 0, 0, 0, 1003, 1004, 7, 28, 0, 0, 1004, 268, 1, 0, 0, 0, 1005, 1006, 7, 29, 0, 0, 1006, 270, 1, 0, 0, 0, 1007, 1008, 7, 30, 0, 0, 1008, 272, 1, 0, 0, 0, 1009, 1010, 7, 31, 0, 0, 1010, 274, 1, 0, 0, 0, 1011, 1012, 7, 32, 0, 0, 1012, 276, 1, 0, 0, 0, 1013, 1014, 7, 33, 0, 0, 1014, 278, 1, 0, 0, 0, 1015, 1016, 7, 34, 0, 0, 1016, 280, 1, 0, 0, 0, 1017, 1018, 7, 35, 0, 0, 1018, 282, 1, 0, 0, 0, 1019, 1020, 7, 36, 0, 0, 1020, 284, 1, 0, 0, 0, 1021, 1022, 7, 37, 0, 0, 1022, 286, 1, 0, 0, 0, 1023, 1024, 7, 38, 0, 0, 1024, 288, 1, 0, 0, 0, 1025, 1026, 7, 39, 0, 0, 1026, 290, 1, 0, 0, 0, 1027, 1028, 7, 40, 0, 0, 1028, 292, 1, 0, 0, 0, 1029, 1030, 7, 41, 0, 0, 1030, 294, 1, 0, 0, 0, 1031, 1032, 7, 42, 0, 0, 1032, 296, 1, 0, 0, 0, 1033, 1034, 7, 43, 0, 0, 1034, 298, 1, 0, 0, 0, 1035, 1036, 7, 44, 0, 0, 1036, 300, 1, 0, 0, 0, 1037, 1038, 7, 45, 0, 0, 1038, 302, 1, 0, 0, 0, 1039, 1040, 7, 46, 0, 0, 1040, 304, 1, 0, 0, 0, 1041, 1042, 7, 47, 0, 0, 1042, 306, 1, 0, 0, 0, 38, 0, 720, 722, 729, 731, 735, 755, 763, 770, 773, 779, 782, 786, 790, 794, 800, 807, 812, 818, 824, 826, 830, 835, 840, 847, 868, 935, 939, 945, 951, 956, 970, 978, 980, 991, 995, 999, 1001, 0]
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.py
new file mode 100644
index 00000000..6100f671
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.py
@@ -0,0 +1,742 @@
+# Generated from Lcypher.g4 by ANTLR 4.13.2
+from antlr4 import *
+from io import StringIO
+import sys
+if sys.version_info[1] > 5:
+ from typing import TextIO
+else:
+ from typing.io import TextIO
+
+
+def serializedATN():
+ return [
+ 4,0,133,1043,6,-1,2,0,7,0,2,1,7,1,2,2,7,2,2,3,7,3,2,4,7,4,2,5,7,
+ 5,2,6,7,6,2,7,7,7,2,8,7,8,2,9,7,9,2,10,7,10,2,11,7,11,2,12,7,12,
+ 2,13,7,13,2,14,7,14,2,15,7,15,2,16,7,16,2,17,7,17,2,18,7,18,2,19,
+ 7,19,2,20,7,20,2,21,7,21,2,22,7,22,2,23,7,23,2,24,7,24,2,25,7,25,
+ 2,26,7,26,2,27,7,27,2,28,7,28,2,29,7,29,2,30,7,30,2,31,7,31,2,32,
+ 7,32,2,33,7,33,2,34,7,34,2,35,7,35,2,36,7,36,2,37,7,37,2,38,7,38,
+ 2,39,7,39,2,40,7,40,2,41,7,41,2,42,7,42,2,43,7,43,2,44,7,44,2,45,
+ 7,45,2,46,7,46,2,47,7,47,2,48,7,48,2,49,7,49,2,50,7,50,2,51,7,51,
+ 2,52,7,52,2,53,7,53,2,54,7,54,2,55,7,55,2,56,7,56,2,57,7,57,2,58,
+ 7,58,2,59,7,59,2,60,7,60,2,61,7,61,2,62,7,62,2,63,7,63,2,64,7,64,
+ 2,65,7,65,2,66,7,66,2,67,7,67,2,68,7,68,2,69,7,69,2,70,7,70,2,71,
+ 7,71,2,72,7,72,2,73,7,73,2,74,7,74,2,75,7,75,2,76,7,76,2,77,7,77,
+ 2,78,7,78,2,79,7,79,2,80,7,80,2,81,7,81,2,82,7,82,2,83,7,83,2,84,
+ 7,84,2,85,7,85,2,86,7,86,2,87,7,87,2,88,7,88,2,89,7,89,2,90,7,90,
+ 2,91,7,91,2,92,7,92,2,93,7,93,2,94,7,94,2,95,7,95,2,96,7,96,2,97,
+ 7,97,2,98,7,98,2,99,7,99,2,100,7,100,2,101,7,101,2,102,7,102,2,103,
+ 7,103,2,104,7,104,2,105,7,105,2,106,7,106,2,107,7,107,2,108,7,108,
+ 2,109,7,109,2,110,7,110,2,111,7,111,2,112,7,112,2,113,7,113,2,114,
+ 7,114,2,115,7,115,2,116,7,116,2,117,7,117,2,118,7,118,2,119,7,119,
+ 2,120,7,120,2,121,7,121,2,122,7,122,2,123,7,123,2,124,7,124,2,125,
+ 7,125,2,126,7,126,2,127,7,127,2,128,7,128,2,129,7,129,2,130,7,130,
+ 2,131,7,131,2,132,7,132,2,133,7,133,2,134,7,134,2,135,7,135,2,136,
+ 7,136,2,137,7,137,2,138,7,138,2,139,7,139,2,140,7,140,2,141,7,141,
+ 2,142,7,142,2,143,7,143,2,144,7,144,2,145,7,145,2,146,7,146,2,147,
+ 7,147,2,148,7,148,2,149,7,149,2,150,7,150,2,151,7,151,2,152,7,152,
+ 1,0,1,0,1,1,1,1,1,2,1,2,1,3,1,3,1,3,1,4,1,4,1,5,1,5,1,6,1,6,1,7,
+ 1,7,1,8,1,8,1,9,1,9,1,10,1,10,1,11,1,11,1,11,1,12,1,12,1,13,1,13,
+ 1,14,1,14,1,15,1,15,1,16,1,16,1,17,1,17,1,17,1,18,1,18,1,19,1,19,
+ 1,20,1,20,1,20,1,21,1,21,1,21,1,22,1,22,1,23,1,23,1,24,1,24,1,25,
+ 1,25,1,26,1,26,1,27,1,27,1,28,1,28,1,29,1,29,1,30,1,30,1,31,1,31,
+ 1,32,1,32,1,33,1,33,1,34,1,34,1,35,1,35,1,36,1,36,1,37,1,37,1,38,
+ 1,38,1,39,1,39,1,40,1,40,1,41,1,41,1,42,1,42,1,43,1,43,1,44,1,44,
+ 1,45,1,45,1,45,1,45,1,45,1,45,1,45,1,45,1,46,1,46,1,46,1,46,1,46,
+ 1,46,1,46,1,46,1,47,1,47,1,47,1,47,1,47,1,47,1,48,1,48,1,48,1,48,
+ 1,49,1,49,1,49,1,49,1,49,1,49,1,49,1,49,1,49,1,50,1,50,1,50,1,50,
+ 1,50,1,50,1,51,1,51,1,51,1,51,1,51,1,51,1,51,1,52,1,52,1,52,1,53,
+ 1,53,1,53,1,53,1,53,1,53,1,54,1,54,1,54,1,55,1,55,1,55,1,55,1,55,
+ 1,55,1,55,1,56,1,56,1,56,1,56,1,57,1,57,1,57,1,57,1,57,1,57,1,57,
+ 1,58,1,58,1,58,1,58,1,58,1,58,1,58,1,59,1,59,1,59,1,59,1,59,1,59,
+ 1,59,1,60,1,60,1,60,1,60,1,60,1,61,1,61,1,61,1,61,1,61,1,61,1,62,
+ 1,62,1,62,1,62,1,62,1,63,1,63,1,63,1,63,1,63,1,63,1,63,1,63,1,63,
+ 1,64,1,64,1,64,1,64,1,64,1,64,1,64,1,65,1,65,1,65,1,65,1,65,1,65,
+ 1,66,1,66,1,66,1,67,1,67,1,67,1,67,1,67,1,68,1,68,1,68,1,68,1,68,
+ 1,68,1,69,1,69,1,69,1,69,1,69,1,69,1,69,1,69,1,69,1,69,1,70,1,70,
+ 1,70,1,70,1,71,1,71,1,71,1,71,1,71,1,71,1,71,1,71,1,71,1,71,1,71,
+ 1,72,1,72,1,72,1,72,1,72,1,73,1,73,1,73,1,73,1,73,1,73,1,74,1,74,
+ 1,74,1,74,1,74,1,75,1,75,1,75,1,75,1,75,1,75,1,76,1,76,1,76,1,76,
+ 1,76,1,76,1,77,1,77,1,77,1,78,1,78,1,78,1,78,1,79,1,79,1,79,1,79,
+ 1,80,1,80,1,80,1,80,1,81,1,81,1,81,1,82,1,82,1,82,1,82,1,82,1,82,
+ 1,82,1,83,1,83,1,83,1,83,1,83,1,84,1,84,1,84,1,84,1,84,1,84,1,84,
+ 1,84,1,84,1,85,1,85,1,85,1,85,1,85,1,85,1,85,1,86,1,86,1,86,1,87,
+ 1,87,1,87,1,87,1,87,1,88,1,88,1,88,1,88,1,88,1,88,1,89,1,89,1,89,
+ 1,89,1,90,1,90,1,90,1,90,1,90,1,91,1,91,1,91,1,91,1,91,1,91,1,91,
+ 1,92,1,92,1,92,1,92,1,92,1,93,1,93,1,93,1,93,1,93,1,93,1,94,1,94,
+ 1,94,1,94,1,94,1,94,1,94,1,95,1,95,1,95,1,95,1,95,1,96,1,96,1,96,
+ 1,96,1,96,1,97,1,97,1,97,1,97,1,98,1,98,1,98,1,98,1,98,1,99,1,99,
+ 1,99,1,99,1,99,1,100,1,100,1,100,5,100,721,8,100,10,100,12,100,724,
+ 9,100,1,100,1,100,1,100,1,100,5,100,730,8,100,10,100,12,100,733,
+ 9,100,1,100,3,100,736,8,100,1,101,1,101,1,101,1,101,1,101,1,101,
+ 1,101,1,101,1,101,1,101,1,101,1,101,1,101,1,101,1,101,1,101,1,101,
+ 1,101,3,101,756,8,101,1,102,1,102,1,102,1,102,4,102,762,8,102,11,
+ 102,12,102,763,1,103,1,103,1,103,5,103,769,8,103,10,103,12,103,772,
+ 9,103,3,103,774,8,103,1,104,1,104,4,104,778,8,104,11,104,12,104,
+ 779,1,105,3,105,783,8,105,1,106,1,106,3,106,787,8,106,1,107,1,107,
+ 3,107,791,8,107,1,108,1,108,3,108,795,8,108,1,109,1,109,1,110,1,
+ 110,3,110,801,8,110,1,111,1,111,1,112,4,112,806,8,112,11,112,12,
+ 112,807,1,112,4,112,811,8,112,11,112,12,112,812,1,112,1,112,4,112,
+ 817,8,112,11,112,12,112,818,1,112,1,112,4,112,823,8,112,11,112,12,
+ 112,824,3,112,827,8,112,1,112,1,112,3,112,831,8,112,1,112,4,112,
+ 834,8,112,11,112,12,112,835,1,113,5,113,839,8,113,10,113,12,113,
+ 842,9,113,1,113,1,113,4,113,846,8,113,11,113,12,113,847,1,114,1,
+ 114,1,114,1,114,1,114,1,114,1,114,1,115,1,115,1,115,1,115,1,115,
+ 1,115,1,115,1,115,1,116,1,116,5,116,867,8,116,10,116,12,116,870,
+ 9,116,1,117,1,117,1,117,1,117,1,117,1,117,1,117,1,117,1,117,1,117,
+ 1,117,1,118,1,118,1,118,1,119,1,119,1,119,1,119,1,120,1,120,1,120,
+ 1,120,1,120,1,120,1,120,1,120,1,121,1,121,1,121,1,121,1,121,1,121,
+ 1,121,1,122,1,122,1,122,1,122,1,122,1,122,1,122,1,122,1,122,1,122,
+ 1,123,1,123,1,123,1,123,1,123,1,123,1,123,1,124,1,124,1,124,1,125,
+ 1,125,1,125,1,125,1,126,1,126,1,126,1,126,1,126,1,127,1,127,3,127,
+ 936,8,127,1,128,1,128,3,128,940,8,128,1,129,1,129,5,129,944,8,129,
+ 10,129,12,129,947,9,129,1,129,4,129,950,8,129,11,129,12,129,951,
+ 1,130,4,130,955,8,130,11,130,12,130,956,1,131,1,131,1,131,1,131,
+ 1,131,1,131,1,131,1,131,1,131,1,131,1,131,1,131,3,131,971,8,131,
+ 1,132,1,132,1,132,1,132,1,132,1,132,5,132,979,8,132,10,132,12,132,
+ 982,9,132,1,132,1,132,1,132,1,132,1,132,1,132,5,132,990,8,132,10,
+ 132,12,132,993,9,132,1,132,3,132,996,8,132,1,132,1,132,3,132,1000,
+ 8,132,3,132,1002,8,132,1,133,1,133,1,134,1,134,1,135,1,135,1,136,
+ 1,136,1,137,1,137,1,138,1,138,1,139,1,139,1,140,1,140,1,141,1,141,
+ 1,142,1,142,1,143,1,143,1,144,1,144,1,145,1,145,1,146,1,146,1,147,
+ 1,147,1,148,1,148,1,149,1,149,1,150,1,150,1,151,1,151,1,152,1,152,
+ 0,0,153,1,1,3,2,5,3,7,4,9,5,11,6,13,7,15,8,17,9,19,10,21,11,23,12,
+ 25,13,27,14,29,15,31,16,33,17,35,18,37,19,39,20,41,21,43,22,45,23,
+ 47,24,49,25,51,26,53,27,55,28,57,29,59,30,61,31,63,32,65,33,67,34,
+ 69,35,71,36,73,37,75,38,77,39,79,40,81,41,83,42,85,43,87,44,89,45,
+ 91,46,93,47,95,48,97,49,99,50,101,51,103,52,105,53,107,54,109,55,
+ 111,56,113,57,115,58,117,59,119,60,121,61,123,62,125,63,127,64,129,
+ 65,131,66,133,67,135,68,137,69,139,70,141,71,143,72,145,73,147,74,
+ 149,75,151,76,153,77,155,78,157,79,159,80,161,81,163,82,165,83,167,
+ 84,169,85,171,86,173,87,175,88,177,89,179,90,181,91,183,92,185,93,
+ 187,94,189,95,191,96,193,97,195,98,197,99,199,100,201,101,203,102,
+ 205,103,207,104,209,105,211,106,213,107,215,108,217,109,219,110,
+ 221,111,223,112,225,113,227,114,229,115,231,116,233,117,235,118,
+ 237,119,239,120,241,121,243,122,245,123,247,124,249,125,251,126,
+ 253,127,255,128,257,129,259,130,261,131,263,132,265,133,267,0,269,
+ 0,271,0,273,0,275,0,277,0,279,0,281,0,283,0,285,0,287,0,289,0,291,
+ 0,293,0,295,0,297,0,299,0,301,0,303,0,305,0,1,0,48,2,0,69,69,101,
+ 101,2,0,88,88,120,120,2,0,80,80,112,112,2,0,76,76,108,108,2,0,65,
+ 65,97,97,2,0,73,73,105,105,2,0,78,78,110,110,2,0,82,82,114,114,2,
+ 0,79,79,111,111,2,0,70,70,102,102,2,0,85,85,117,117,2,0,84,84,116,
+ 116,2,0,77,77,109,109,2,0,67,67,99,99,2,0,72,72,104,104,2,0,87,87,
+ 119,119,2,0,68,68,100,100,2,0,83,83,115,115,2,0,71,71,103,103,2,
+ 0,86,86,118,118,2,0,89,89,121,121,2,0,66,66,98,98,2,0,75,75,107,
+ 107,2,0,74,74,106,106,13,0,34,34,39,39,66,66,70,70,78,78,82,82,84,
+ 84,92,92,98,98,102,102,110,110,114,114,116,116,2,0,65,70,97,102,
+ 2,0,81,81,113,113,8,0,160,160,5760,5760,6158,6158,8192,8202,8232,
+ 8233,8239,8239,8287,8287,12288,12288,1,0,12,12,2,0,0,95,97,65535,
+ 1,0,30,30,429,0,48,57,65,90,95,95,97,122,170,170,181,181,183,183,
+ 186,186,192,214,216,246,248,705,710,721,736,740,748,748,750,750,
+ 768,884,886,887,890,893,902,906,908,908,910,929,931,1013,1015,1153,
+ 1155,1159,1162,1319,1329,1366,1369,1369,1377,1415,1425,1469,1471,
+ 1471,1473,1474,1476,1477,1479,1479,1488,1514,1520,1522,1552,1562,
+ 1568,1641,1646,1747,1749,1756,1759,1768,1770,1788,1791,1791,1808,
+ 1866,1869,1969,1984,2037,2042,2042,2048,2093,2112,2139,2208,2208,
+ 2210,2220,2276,2302,2304,2403,2406,2415,2417,2423,2425,2431,2433,
+ 2435,2437,2444,2447,2448,2451,2472,2474,2480,2482,2482,2486,2489,
+ 2492,2500,2503,2504,2507,2510,2519,2519,2524,2525,2527,2531,2534,
+ 2545,2561,2563,2565,2570,2575,2576,2579,2600,2602,2608,2610,2611,
+ 2613,2614,2616,2617,2620,2620,2622,2626,2631,2632,2635,2637,2641,
+ 2641,2649,2652,2654,2654,2662,2677,2689,2691,2693,2701,2703,2705,
+ 2707,2728,2730,2736,2738,2739,2741,2745,2748,2757,2759,2761,2763,
+ 2765,2768,2768,2784,2787,2790,2799,2817,2819,2821,2828,2831,2832,
+ 2835,2856,2858,2864,2866,2867,2869,2873,2876,2884,2887,2888,2891,
+ 2893,2902,2903,2908,2909,2911,2915,2918,2927,2929,2929,2946,2947,
+ 2949,2954,2958,2960,2962,2965,2969,2970,2972,2972,2974,2975,2979,
+ 2980,2984,2986,2990,3001,3006,3010,3014,3016,3018,3021,3024,3024,
+ 3031,3031,3046,3055,3073,3075,3077,3084,3086,3088,3090,3112,3114,
+ 3123,3125,3129,3133,3140,3142,3144,3146,3149,3157,3158,3160,3161,
+ 3168,3171,3174,3183,3202,3203,3205,3212,3214,3216,3218,3240,3242,
+ 3251,3253,3257,3260,3268,3270,3272,3274,3277,3285,3286,3294,3294,
+ 3296,3299,3302,3311,3313,3314,3330,3331,3333,3340,3342,3344,3346,
+ 3386,3389,3396,3398,3400,3402,3406,3415,3415,3424,3427,3430,3439,
+ 3450,3455,3458,3459,3461,3478,3482,3505,3507,3515,3517,3517,3520,
+ 3526,3530,3530,3535,3540,3542,3542,3544,3551,3570,3571,3585,3642,
+ 3648,3662,3664,3673,3713,3714,3716,3716,3719,3720,3722,3722,3725,
+ 3725,3732,3735,3737,3743,3745,3747,3749,3749,3751,3751,3754,3755,
+ 3757,3769,3771,3773,3776,3780,3782,3782,3784,3789,3792,3801,3804,
+ 3807,3840,3840,3864,3865,3872,3881,3893,3893,3895,3895,3897,3897,
+ 3902,3911,3913,3948,3953,3972,3974,3991,3993,4028,4038,4038,4096,
+ 4169,4176,4253,4256,4293,4295,4295,4301,4301,4304,4346,4348,4680,
+ 4682,4685,4688,4694,4696,4696,4698,4701,4704,4744,4746,4749,4752,
+ 4784,4786,4789,4792,4798,4800,4800,4802,4805,4808,4822,4824,4880,
+ 4882,4885,4888,4954,4957,4959,4969,4977,4992,5007,5024,5108,5121,
+ 5740,5743,5759,5761,5786,5792,5866,5870,5872,5888,5900,5902,5908,
+ 5920,5940,5952,5971,5984,5996,5998,6000,6002,6003,6016,6099,6103,
+ 6103,6108,6109,6112,6121,6155,6157,6160,6169,6176,6263,6272,6314,
+ 6320,6389,6400,6428,6432,6443,6448,6459,6470,6509,6512,6516,6528,
+ 6571,6576,6601,6608,6618,6656,6683,6688,6750,6752,6780,6783,6793,
+ 6800,6809,6823,6823,6912,6987,6992,7001,7019,7027,7040,7155,7168,
+ 7223,7232,7241,7245,7293,7376,7378,7380,7414,7424,7654,7676,7957,
+ 7960,7965,7968,8005,8008,8013,8016,8023,8025,8025,8027,8027,8029,
+ 8029,8031,8061,8064,8116,8118,8124,8126,8126,8130,8132,8134,8140,
+ 8144,8147,8150,8155,8160,8172,8178,8180,8182,8188,8255,8256,8276,
+ 8276,8305,8305,8319,8319,8336,8348,8400,8412,8417,8417,8421,8432,
+ 8450,8450,8455,8455,8458,8467,8469,8469,8472,8477,8484,8484,8486,
+ 8486,8488,8488,8490,8505,8508,8511,8517,8521,8526,8526,8544,8584,
+ 11264,11310,11312,11358,11360,11492,11499,11507,11520,11557,11559,
+ 11559,11565,11565,11568,11623,11631,11631,11647,11670,11680,11686,
+ 11688,11694,11696,11702,11704,11710,11712,11718,11720,11726,11728,
+ 11734,11736,11742,11744,11775,12293,12295,12321,12335,12337,12341,
+ 12344,12348,12353,12438,12441,12447,12449,12538,12540,12543,12549,
+ 12589,12593,12686,12704,12730,12784,12799,13312,19893,19968,40908,
+ 40960,42124,42192,42237,42240,42508,42512,42539,42560,42607,42612,
+ 42621,42623,42647,42655,42737,42775,42783,42786,42888,42891,42894,
+ 42896,42899,42912,42922,43000,43047,43072,43123,43136,43204,43216,
+ 43225,43232,43255,43259,43259,43264,43309,43312,43347,43360,43388,
+ 43392,43456,43471,43481,43520,43574,43584,43597,43600,43609,43616,
+ 43638,43642,43643,43648,43714,43739,43741,43744,43759,43762,43766,
+ 43777,43782,43785,43790,43793,43798,43808,43814,43816,43822,43968,
+ 44010,44012,44013,44016,44025,44032,55203,55216,55238,55243,55291,
+ 63744,64109,64112,64217,64256,64262,64275,64279,64285,64296,64298,
+ 64310,64312,64316,64318,64318,64320,64321,64323,64324,64326,64433,
+ 64467,64829,64848,64911,64914,64967,65008,65019,65024,65039,65056,
+ 65062,65075,65076,65101,65103,65136,65140,65142,65276,65296,65305,
+ 65313,65338,65343,65343,65345,65370,65382,65470,65474,65479,65482,
+ 65487,65490,65495,65498,65500,2,0,0,41,43,65535,3,0,0,38,40,91,93,
+ 65535,3,0,0,9,11,12,14,65535,2,0,0,46,48,65535,1,0,29,29,1,0,28,
+ 28,1,0,13,13,17,0,36,36,162,165,1423,1423,1547,1547,2546,2547,2555,
+ 2555,2801,2801,3065,3065,3647,3647,6107,6107,8352,8378,43064,43064,
+ 65020,65020,65129,65129,65284,65284,65504,65505,65509,65510,1,0,
+ 32,32,6,0,95,95,8255,8256,8276,8276,65075,65076,65101,65103,65343,
+ 65343,1,0,9,9,3,0,0,33,35,91,93,65535,1,0,10,10,1,0,11,11,1,0,31,
+ 31,370,0,65,90,97,122,170,170,181,181,186,186,192,214,216,246,248,
+ 705,710,721,736,740,748,748,750,750,880,884,886,887,890,893,902,
+ 902,904,906,908,908,910,929,931,1013,1015,1153,1162,1319,1329,1366,
+ 1369,1369,1377,1415,1488,1514,1520,1522,1568,1610,1646,1647,1649,
+ 1747,1749,1749,1765,1766,1774,1775,1786,1788,1791,1791,1808,1808,
+ 1810,1839,1869,1957,1969,1969,1994,2026,2036,2037,2042,2042,2048,
+ 2069,2074,2074,2084,2084,2088,2088,2112,2136,2208,2208,2210,2220,
+ 2308,2361,2365,2365,2384,2384,2392,2401,2417,2423,2425,2431,2437,
+ 2444,2447,2448,2451,2472,2474,2480,2482,2482,2486,2489,2493,2493,
+ 2510,2510,2524,2525,2527,2529,2544,2545,2565,2570,2575,2576,2579,
+ 2600,2602,2608,2610,2611,2613,2614,2616,2617,2649,2652,2654,2654,
+ 2674,2676,2693,2701,2703,2705,2707,2728,2730,2736,2738,2739,2741,
+ 2745,2749,2749,2768,2768,2784,2785,2821,2828,2831,2832,2835,2856,
+ 2858,2864,2866,2867,2869,2873,2877,2877,2908,2909,2911,2913,2929,
+ 2929,2947,2947,2949,2954,2958,2960,2962,2965,2969,2970,2972,2972,
+ 2974,2975,2979,2980,2984,2986,2990,3001,3024,3024,3077,3084,3086,
+ 3088,3090,3112,3114,3123,3125,3129,3133,3133,3160,3161,3168,3169,
+ 3205,3212,3214,3216,3218,3240,3242,3251,3253,3257,3261,3261,3294,
+ 3294,3296,3297,3313,3314,3333,3340,3342,3344,3346,3386,3389,3389,
+ 3406,3406,3424,3425,3450,3455,3461,3478,3482,3505,3507,3515,3517,
+ 3517,3520,3526,3585,3632,3634,3635,3648,3654,3713,3714,3716,3716,
+ 3719,3720,3722,3722,3725,3725,3732,3735,3737,3743,3745,3747,3749,
+ 3749,3751,3751,3754,3755,3757,3760,3762,3763,3773,3773,3776,3780,
+ 3782,3782,3804,3807,3840,3840,3904,3911,3913,3948,3976,3980,4096,
+ 4138,4159,4159,4176,4181,4186,4189,4193,4193,4197,4198,4206,4208,
+ 4213,4225,4238,4238,4256,4293,4295,4295,4301,4301,4304,4346,4348,
+ 4680,4682,4685,4688,4694,4696,4696,4698,4701,4704,4744,4746,4749,
+ 4752,4784,4786,4789,4792,4798,4800,4800,4802,4805,4808,4822,4824,
+ 4880,4882,4885,4888,4954,4992,5007,5024,5108,5121,5740,5743,5759,
+ 5761,5786,5792,5866,5870,5872,5888,5900,5902,5905,5920,5937,5952,
+ 5969,5984,5996,5998,6000,6016,6067,6103,6103,6108,6108,6176,6263,
+ 6272,6312,6314,6314,6320,6389,6400,6428,6480,6509,6512,6516,6528,
+ 6571,6593,6599,6656,6678,6688,6740,6823,6823,6917,6963,6981,6987,
+ 7043,7072,7086,7087,7098,7141,7168,7203,7245,7247,7258,7293,7401,
+ 7404,7406,7409,7413,7414,7424,7615,7680,7957,7960,7965,7968,8005,
+ 8008,8013,8016,8023,8025,8025,8027,8027,8029,8029,8031,8061,8064,
+ 8116,8118,8124,8126,8126,8130,8132,8134,8140,8144,8147,8150,8155,
+ 8160,8172,8178,8180,8182,8188,8305,8305,8319,8319,8336,8348,8450,
+ 8450,8455,8455,8458,8467,8469,8469,8472,8477,8484,8484,8486,8486,
+ 8488,8488,8490,8505,8508,8511,8517,8521,8526,8526,8544,8584,11264,
+ 11310,11312,11358,11360,11492,11499,11502,11506,11507,11520,11557,
+ 11559,11559,11565,11565,11568,11623,11631,11631,11648,11670,11680,
+ 11686,11688,11694,11696,11702,11704,11710,11712,11718,11720,11726,
+ 11728,11734,11736,11742,12293,12295,12321,12329,12337,12341,12344,
+ 12348,12353,12438,12443,12447,12449,12538,12540,12543,12549,12589,
+ 12593,12686,12704,12730,12784,12799,13312,19893,19968,40908,40960,
+ 42124,42192,42237,42240,42508,42512,42527,42538,42539,42560,42606,
+ 42623,42647,42656,42735,42775,42783,42786,42888,42891,42894,42896,
+ 42899,42912,42922,43000,43009,43011,43013,43015,43018,43020,43042,
+ 43072,43123,43138,43187,43250,43255,43259,43259,43274,43301,43312,
+ 43334,43360,43388,43396,43442,43471,43471,43520,43560,43584,43586,
+ 43588,43595,43616,43638,43642,43642,43648,43695,43697,43697,43701,
+ 43702,43705,43709,43712,43712,43714,43714,43739,43741,43744,43754,
+ 43762,43764,43777,43782,43785,43790,43793,43798,43808,43814,43816,
+ 43822,43968,44002,44032,55203,55216,55238,55243,55291,63744,64109,
+ 64112,64217,64256,64262,64275,64279,64285,64285,64287,64296,64298,
+ 64310,64312,64316,64318,64318,64320,64321,64323,64324,64326,64433,
+ 64467,64829,64848,64911,64914,64967,65008,65019,65136,65140,65142,
+ 65276,65313,65338,65345,65370,65382,65470,65474,65479,65482,65487,
+ 65490,65495,65498,65500,1070,0,1,1,0,0,0,0,3,1,0,0,0,0,5,1,0,0,0,
+ 0,7,1,0,0,0,0,9,1,0,0,0,0,11,1,0,0,0,0,13,1,0,0,0,0,15,1,0,0,0,0,
+ 17,1,0,0,0,0,19,1,0,0,0,0,21,1,0,0,0,0,23,1,0,0,0,0,25,1,0,0,0,0,
+ 27,1,0,0,0,0,29,1,0,0,0,0,31,1,0,0,0,0,33,1,0,0,0,0,35,1,0,0,0,0,
+ 37,1,0,0,0,0,39,1,0,0,0,0,41,1,0,0,0,0,43,1,0,0,0,0,45,1,0,0,0,0,
+ 47,1,0,0,0,0,49,1,0,0,0,0,51,1,0,0,0,0,53,1,0,0,0,0,55,1,0,0,0,0,
+ 57,1,0,0,0,0,59,1,0,0,0,0,61,1,0,0,0,0,63,1,0,0,0,0,65,1,0,0,0,0,
+ 67,1,0,0,0,0,69,1,0,0,0,0,71,1,0,0,0,0,73,1,0,0,0,0,75,1,0,0,0,0,
+ 77,1,0,0,0,0,79,1,0,0,0,0,81,1,0,0,0,0,83,1,0,0,0,0,85,1,0,0,0,0,
+ 87,1,0,0,0,0,89,1,0,0,0,0,91,1,0,0,0,0,93,1,0,0,0,0,95,1,0,0,0,0,
+ 97,1,0,0,0,0,99,1,0,0,0,0,101,1,0,0,0,0,103,1,0,0,0,0,105,1,0,0,
+ 0,0,107,1,0,0,0,0,109,1,0,0,0,0,111,1,0,0,0,0,113,1,0,0,0,0,115,
+ 1,0,0,0,0,117,1,0,0,0,0,119,1,0,0,0,0,121,1,0,0,0,0,123,1,0,0,0,
+ 0,125,1,0,0,0,0,127,1,0,0,0,0,129,1,0,0,0,0,131,1,0,0,0,0,133,1,
+ 0,0,0,0,135,1,0,0,0,0,137,1,0,0,0,0,139,1,0,0,0,0,141,1,0,0,0,0,
+ 143,1,0,0,0,0,145,1,0,0,0,0,147,1,0,0,0,0,149,1,0,0,0,0,151,1,0,
+ 0,0,0,153,1,0,0,0,0,155,1,0,0,0,0,157,1,0,0,0,0,159,1,0,0,0,0,161,
+ 1,0,0,0,0,163,1,0,0,0,0,165,1,0,0,0,0,167,1,0,0,0,0,169,1,0,0,0,
+ 0,171,1,0,0,0,0,173,1,0,0,0,0,175,1,0,0,0,0,177,1,0,0,0,0,179,1,
+ 0,0,0,0,181,1,0,0,0,0,183,1,0,0,0,0,185,1,0,0,0,0,187,1,0,0,0,0,
+ 189,1,0,0,0,0,191,1,0,0,0,0,193,1,0,0,0,0,195,1,0,0,0,0,197,1,0,
+ 0,0,0,199,1,0,0,0,0,201,1,0,0,0,0,203,1,0,0,0,0,205,1,0,0,0,0,207,
+ 1,0,0,0,0,209,1,0,0,0,0,211,1,0,0,0,0,213,1,0,0,0,0,215,1,0,0,0,
+ 0,217,1,0,0,0,0,219,1,0,0,0,0,221,1,0,0,0,0,223,1,0,0,0,0,225,1,
+ 0,0,0,0,227,1,0,0,0,0,229,1,0,0,0,0,231,1,0,0,0,0,233,1,0,0,0,0,
+ 235,1,0,0,0,0,237,1,0,0,0,0,239,1,0,0,0,0,241,1,0,0,0,0,243,1,0,
+ 0,0,0,245,1,0,0,0,0,247,1,0,0,0,0,249,1,0,0,0,0,251,1,0,0,0,0,253,
+ 1,0,0,0,0,255,1,0,0,0,0,257,1,0,0,0,0,259,1,0,0,0,0,261,1,0,0,0,
+ 0,263,1,0,0,0,0,265,1,0,0,0,1,307,1,0,0,0,3,309,1,0,0,0,5,311,1,
+ 0,0,0,7,313,1,0,0,0,9,316,1,0,0,0,11,318,1,0,0,0,13,320,1,0,0,0,
+ 15,322,1,0,0,0,17,324,1,0,0,0,19,326,1,0,0,0,21,328,1,0,0,0,23,330,
+ 1,0,0,0,25,333,1,0,0,0,27,335,1,0,0,0,29,337,1,0,0,0,31,339,1,0,
+ 0,0,33,341,1,0,0,0,35,343,1,0,0,0,37,346,1,0,0,0,39,348,1,0,0,0,
+ 41,350,1,0,0,0,43,353,1,0,0,0,45,356,1,0,0,0,47,358,1,0,0,0,49,360,
+ 1,0,0,0,51,362,1,0,0,0,53,364,1,0,0,0,55,366,1,0,0,0,57,368,1,0,
+ 0,0,59,370,1,0,0,0,61,372,1,0,0,0,63,374,1,0,0,0,65,376,1,0,0,0,
+ 67,378,1,0,0,0,69,380,1,0,0,0,71,382,1,0,0,0,73,384,1,0,0,0,75,386,
+ 1,0,0,0,77,388,1,0,0,0,79,390,1,0,0,0,81,392,1,0,0,0,83,394,1,0,
+ 0,0,85,396,1,0,0,0,87,398,1,0,0,0,89,400,1,0,0,0,91,402,1,0,0,0,
+ 93,410,1,0,0,0,95,418,1,0,0,0,97,424,1,0,0,0,99,428,1,0,0,0,101,
+ 437,1,0,0,0,103,443,1,0,0,0,105,450,1,0,0,0,107,453,1,0,0,0,109,
+ 459,1,0,0,0,111,462,1,0,0,0,113,469,1,0,0,0,115,473,1,0,0,0,117,
+ 480,1,0,0,0,119,487,1,0,0,0,121,494,1,0,0,0,123,499,1,0,0,0,125,
+ 505,1,0,0,0,127,510,1,0,0,0,129,519,1,0,0,0,131,526,1,0,0,0,133,
+ 532,1,0,0,0,135,535,1,0,0,0,137,540,1,0,0,0,139,546,1,0,0,0,141,
+ 556,1,0,0,0,143,560,1,0,0,0,145,571,1,0,0,0,147,576,1,0,0,0,149,
+ 582,1,0,0,0,151,587,1,0,0,0,153,593,1,0,0,0,155,599,1,0,0,0,157,
+ 602,1,0,0,0,159,606,1,0,0,0,161,610,1,0,0,0,163,614,1,0,0,0,165,
+ 617,1,0,0,0,167,624,1,0,0,0,169,629,1,0,0,0,171,638,1,0,0,0,173,
+ 645,1,0,0,0,175,648,1,0,0,0,177,653,1,0,0,0,179,659,1,0,0,0,181,
+ 663,1,0,0,0,183,668,1,0,0,0,185,675,1,0,0,0,187,680,1,0,0,0,189,
+ 686,1,0,0,0,191,693,1,0,0,0,193,698,1,0,0,0,195,703,1,0,0,0,197,
+ 707,1,0,0,0,199,712,1,0,0,0,201,735,1,0,0,0,203,737,1,0,0,0,205,
+ 757,1,0,0,0,207,773,1,0,0,0,209,775,1,0,0,0,211,782,1,0,0,0,213,
+ 786,1,0,0,0,215,790,1,0,0,0,217,794,1,0,0,0,219,796,1,0,0,0,221,
+ 800,1,0,0,0,223,802,1,0,0,0,225,826,1,0,0,0,227,840,1,0,0,0,229,
+ 849,1,0,0,0,231,856,1,0,0,0,233,864,1,0,0,0,235,871,1,0,0,0,237,
+ 882,1,0,0,0,239,885,1,0,0,0,241,889,1,0,0,0,243,897,1,0,0,0,245,
+ 904,1,0,0,0,247,914,1,0,0,0,249,921,1,0,0,0,251,924,1,0,0,0,253,
+ 928,1,0,0,0,255,935,1,0,0,0,257,939,1,0,0,0,259,949,1,0,0,0,261,
+ 954,1,0,0,0,263,970,1,0,0,0,265,1001,1,0,0,0,267,1003,1,0,0,0,269,
+ 1005,1,0,0,0,271,1007,1,0,0,0,273,1009,1,0,0,0,275,1011,1,0,0,0,
+ 277,1013,1,0,0,0,279,1015,1,0,0,0,281,1017,1,0,0,0,283,1019,1,0,
+ 0,0,285,1021,1,0,0,0,287,1023,1,0,0,0,289,1025,1,0,0,0,291,1027,
+ 1,0,0,0,293,1029,1,0,0,0,295,1031,1,0,0,0,297,1033,1,0,0,0,299,1035,
+ 1,0,0,0,301,1037,1,0,0,0,303,1039,1,0,0,0,305,1041,1,0,0,0,307,308,
+ 5,59,0,0,308,2,1,0,0,0,309,310,5,44,0,0,310,4,1,0,0,0,311,312,5,
+ 61,0,0,312,6,1,0,0,0,313,314,5,43,0,0,314,315,5,61,0,0,315,8,1,0,
+ 0,0,316,317,5,42,0,0,317,10,1,0,0,0,318,319,5,40,0,0,319,12,1,0,
+ 0,0,320,321,5,41,0,0,321,14,1,0,0,0,322,323,5,91,0,0,323,16,1,0,
+ 0,0,324,325,5,93,0,0,325,18,1,0,0,0,326,327,5,58,0,0,327,20,1,0,
+ 0,0,328,329,5,124,0,0,329,22,1,0,0,0,330,331,5,46,0,0,331,332,5,
+ 46,0,0,332,24,1,0,0,0,333,334,5,43,0,0,334,26,1,0,0,0,335,336,5,
+ 45,0,0,336,28,1,0,0,0,337,338,5,47,0,0,338,30,1,0,0,0,339,340,5,
+ 37,0,0,340,32,1,0,0,0,341,342,5,94,0,0,342,34,1,0,0,0,343,344,5,
+ 60,0,0,344,345,5,62,0,0,345,36,1,0,0,0,346,347,5,60,0,0,347,38,1,
+ 0,0,0,348,349,5,62,0,0,349,40,1,0,0,0,350,351,5,60,0,0,351,352,5,
+ 61,0,0,352,42,1,0,0,0,353,354,5,62,0,0,354,355,5,61,0,0,355,44,1,
+ 0,0,0,356,357,5,46,0,0,357,46,1,0,0,0,358,359,5,123,0,0,359,48,1,
+ 0,0,0,360,361,5,125,0,0,361,50,1,0,0,0,362,363,5,36,0,0,363,52,1,
+ 0,0,0,364,365,5,10216,0,0,365,54,1,0,0,0,366,367,5,12296,0,0,367,
+ 56,1,0,0,0,368,369,5,65124,0,0,369,58,1,0,0,0,370,371,5,65308,0,
+ 0,371,60,1,0,0,0,372,373,5,10217,0,0,373,62,1,0,0,0,374,375,5,12297,
+ 0,0,375,64,1,0,0,0,376,377,5,65125,0,0,377,66,1,0,0,0,378,379,5,
+ 65310,0,0,379,68,1,0,0,0,380,381,5,173,0,0,381,70,1,0,0,0,382,383,
+ 5,8208,0,0,383,72,1,0,0,0,384,385,5,8209,0,0,385,74,1,0,0,0,386,
+ 387,5,8210,0,0,387,76,1,0,0,0,388,389,5,8211,0,0,389,78,1,0,0,0,
+ 390,391,5,8212,0,0,391,80,1,0,0,0,392,393,5,8213,0,0,393,82,1,0,
+ 0,0,394,395,5,8722,0,0,395,84,1,0,0,0,396,397,5,65112,0,0,397,86,
+ 1,0,0,0,398,399,5,65123,0,0,399,88,1,0,0,0,400,401,5,65293,0,0,401,
+ 90,1,0,0,0,402,403,7,0,0,0,403,404,7,1,0,0,404,405,7,2,0,0,405,406,
+ 7,3,0,0,406,407,7,4,0,0,407,408,7,5,0,0,408,409,7,6,0,0,409,92,1,
+ 0,0,0,410,411,7,2,0,0,411,412,7,7,0,0,412,413,7,8,0,0,413,414,7,
+ 9,0,0,414,415,7,5,0,0,415,416,7,3,0,0,416,417,7,0,0,0,417,94,1,0,
+ 0,0,418,419,7,10,0,0,419,420,7,6,0,0,420,421,7,5,0,0,421,422,7,8,
+ 0,0,422,423,7,6,0,0,423,96,1,0,0,0,424,425,7,4,0,0,425,426,7,3,0,
+ 0,426,427,7,3,0,0,427,98,1,0,0,0,428,429,7,8,0,0,429,430,7,2,0,0,
+ 430,431,7,11,0,0,431,432,7,5,0,0,432,433,7,8,0,0,433,434,7,6,0,0,
+ 434,435,7,4,0,0,435,436,7,3,0,0,436,100,1,0,0,0,437,438,7,12,0,0,
+ 438,439,7,4,0,0,439,440,7,11,0,0,440,441,7,13,0,0,441,442,7,14,0,
+ 0,442,102,1,0,0,0,443,444,7,10,0,0,444,445,7,6,0,0,445,446,7,15,
+ 0,0,446,447,7,5,0,0,447,448,7,6,0,0,448,449,7,16,0,0,449,104,1,0,
+ 0,0,450,451,7,4,0,0,451,452,7,17,0,0,452,106,1,0,0,0,453,454,7,12,
+ 0,0,454,455,7,0,0,0,455,456,7,7,0,0,456,457,7,18,0,0,457,458,7,0,
+ 0,0,458,108,1,0,0,0,459,460,7,8,0,0,460,461,7,6,0,0,461,110,1,0,
+ 0,0,462,463,7,13,0,0,463,464,7,7,0,0,464,465,7,0,0,0,465,466,7,4,
+ 0,0,466,467,7,11,0,0,467,468,7,0,0,0,468,112,1,0,0,0,469,470,7,17,
+ 0,0,470,471,7,0,0,0,471,472,7,11,0,0,472,114,1,0,0,0,473,474,7,16,
+ 0,0,474,475,7,0,0,0,475,476,7,11,0,0,476,477,7,4,0,0,477,478,7,13,
+ 0,0,478,479,7,14,0,0,479,116,1,0,0,0,480,481,7,16,0,0,481,482,7,
+ 0,0,0,482,483,7,3,0,0,483,484,7,0,0,0,484,485,7,11,0,0,485,486,7,
+ 0,0,0,486,118,1,0,0,0,487,488,7,7,0,0,488,489,7,0,0,0,489,490,7,
+ 12,0,0,490,491,7,8,0,0,491,492,7,19,0,0,492,493,7,0,0,0,493,120,
+ 1,0,0,0,494,495,7,13,0,0,495,496,7,4,0,0,496,497,7,3,0,0,497,498,
+ 7,3,0,0,498,122,1,0,0,0,499,500,7,20,0,0,500,501,7,5,0,0,501,502,
+ 7,0,0,0,502,503,7,3,0,0,503,504,7,16,0,0,504,124,1,0,0,0,505,506,
+ 7,15,0,0,506,507,7,5,0,0,507,508,7,11,0,0,508,509,7,14,0,0,509,126,
+ 1,0,0,0,510,511,7,16,0,0,511,512,7,5,0,0,512,513,7,17,0,0,513,514,
+ 7,11,0,0,514,515,7,5,0,0,515,516,7,6,0,0,516,517,7,13,0,0,517,518,
+ 7,11,0,0,518,128,1,0,0,0,519,520,7,7,0,0,520,521,7,0,0,0,521,522,
+ 7,11,0,0,522,523,7,10,0,0,523,524,7,7,0,0,524,525,7,6,0,0,525,130,
+ 1,0,0,0,526,527,7,8,0,0,527,528,7,7,0,0,528,529,7,16,0,0,529,530,
+ 7,0,0,0,530,531,7,7,0,0,531,132,1,0,0,0,532,533,7,21,0,0,533,534,
+ 7,20,0,0,534,134,1,0,0,0,535,536,7,17,0,0,536,537,7,22,0,0,537,538,
+ 7,5,0,0,538,539,7,2,0,0,539,136,1,0,0,0,540,541,7,3,0,0,541,542,
+ 7,5,0,0,542,543,7,12,0,0,543,544,7,5,0,0,544,545,7,11,0,0,545,138,
+ 1,0,0,0,546,547,7,4,0,0,547,548,7,17,0,0,548,549,7,13,0,0,549,550,
+ 7,0,0,0,550,551,7,6,0,0,551,552,7,16,0,0,552,553,7,5,0,0,553,554,
+ 7,6,0,0,554,555,7,18,0,0,555,140,1,0,0,0,556,557,7,4,0,0,557,558,
+ 7,17,0,0,558,559,7,13,0,0,559,142,1,0,0,0,560,561,7,16,0,0,561,562,
+ 7,0,0,0,562,563,7,17,0,0,563,564,7,13,0,0,564,565,7,0,0,0,565,566,
+ 7,6,0,0,566,567,7,16,0,0,567,568,7,5,0,0,568,569,7,6,0,0,569,570,
+ 7,18,0,0,570,144,1,0,0,0,571,572,7,16,0,0,572,573,7,0,0,0,573,574,
+ 7,17,0,0,574,575,7,13,0,0,575,146,1,0,0,0,576,577,7,10,0,0,577,578,
+ 7,17,0,0,578,579,7,5,0,0,579,580,7,6,0,0,580,581,7,18,0,0,581,148,
+ 1,0,0,0,582,583,7,23,0,0,583,584,7,8,0,0,584,585,7,5,0,0,585,586,
+ 7,6,0,0,586,150,1,0,0,0,587,588,7,17,0,0,588,589,7,11,0,0,589,590,
+ 7,4,0,0,590,591,7,7,0,0,591,592,7,11,0,0,592,152,1,0,0,0,593,594,
+ 7,15,0,0,594,595,7,14,0,0,595,596,7,0,0,0,596,597,7,7,0,0,597,598,
+ 7,0,0,0,598,154,1,0,0,0,599,600,7,8,0,0,600,601,7,7,0,0,601,156,
+ 1,0,0,0,602,603,7,1,0,0,603,604,7,8,0,0,604,605,7,7,0,0,605,158,
+ 1,0,0,0,606,607,7,4,0,0,607,608,7,6,0,0,608,609,7,16,0,0,609,160,
+ 1,0,0,0,610,611,7,6,0,0,611,612,7,8,0,0,612,613,7,11,0,0,613,162,
+ 1,0,0,0,614,615,7,5,0,0,615,616,7,6,0,0,616,164,1,0,0,0,617,618,
+ 7,17,0,0,618,619,7,11,0,0,619,620,7,4,0,0,620,621,7,7,0,0,621,622,
+ 7,11,0,0,622,623,7,17,0,0,623,166,1,0,0,0,624,625,7,0,0,0,625,626,
+ 7,6,0,0,626,627,7,16,0,0,627,628,7,17,0,0,628,168,1,0,0,0,629,630,
+ 7,13,0,0,630,631,7,8,0,0,631,632,7,6,0,0,632,633,7,11,0,0,633,634,
+ 7,4,0,0,634,635,7,5,0,0,635,636,7,6,0,0,636,637,7,17,0,0,637,170,
+ 1,0,0,0,638,639,7,7,0,0,639,640,7,0,0,0,640,641,7,18,0,0,641,642,
+ 7,0,0,0,642,643,7,1,0,0,643,644,7,2,0,0,644,172,1,0,0,0,645,646,
+ 7,5,0,0,646,647,7,17,0,0,647,174,1,0,0,0,648,649,7,6,0,0,649,650,
+ 7,10,0,0,650,651,7,3,0,0,651,652,7,3,0,0,652,176,1,0,0,0,653,654,
+ 7,13,0,0,654,655,7,8,0,0,655,656,7,10,0,0,656,657,7,6,0,0,657,658,
+ 7,11,0,0,658,178,1,0,0,0,659,660,7,4,0,0,660,661,7,6,0,0,661,662,
+ 7,20,0,0,662,180,1,0,0,0,663,664,7,6,0,0,664,665,7,8,0,0,665,666,
+ 7,6,0,0,666,667,7,0,0,0,667,182,1,0,0,0,668,669,7,17,0,0,669,670,
+ 7,5,0,0,670,671,7,6,0,0,671,672,7,18,0,0,672,673,7,3,0,0,673,674,
+ 7,0,0,0,674,184,1,0,0,0,675,676,7,11,0,0,676,677,7,7,0,0,677,678,
+ 7,10,0,0,678,679,7,0,0,0,679,186,1,0,0,0,680,681,7,9,0,0,681,682,
+ 7,4,0,0,682,683,7,3,0,0,683,684,7,17,0,0,684,685,7,0,0,0,685,188,
+ 1,0,0,0,686,687,7,0,0,0,687,688,7,1,0,0,688,689,7,5,0,0,689,690,
+ 7,17,0,0,690,691,7,11,0,0,691,692,7,17,0,0,692,190,1,0,0,0,693,694,
+ 7,13,0,0,694,695,7,4,0,0,695,696,7,17,0,0,696,697,7,0,0,0,697,192,
+ 1,0,0,0,698,699,7,0,0,0,699,700,7,3,0,0,700,701,7,17,0,0,701,702,
+ 7,0,0,0,702,194,1,0,0,0,703,704,7,0,0,0,704,705,7,6,0,0,705,706,
+ 7,16,0,0,706,196,1,0,0,0,707,708,7,15,0,0,708,709,7,14,0,0,709,710,
+ 7,0,0,0,710,711,7,6,0,0,711,198,1,0,0,0,712,713,7,11,0,0,713,714,
+ 7,14,0,0,714,715,7,0,0,0,715,716,7,6,0,0,716,200,1,0,0,0,717,722,
+ 5,34,0,0,718,721,3,297,148,0,719,721,3,203,101,0,720,718,1,0,0,0,
+ 720,719,1,0,0,0,721,724,1,0,0,0,722,720,1,0,0,0,722,723,1,0,0,0,
+ 723,725,1,0,0,0,724,722,1,0,0,0,725,736,5,34,0,0,726,731,5,39,0,
+ 0,727,730,3,277,138,0,728,730,3,203,101,0,729,727,1,0,0,0,729,728,
+ 1,0,0,0,730,733,1,0,0,0,731,729,1,0,0,0,731,732,1,0,0,0,732,734,
+ 1,0,0,0,733,731,1,0,0,0,734,736,5,39,0,0,735,717,1,0,0,0,735,726,
+ 1,0,0,0,736,202,1,0,0,0,737,755,5,92,0,0,738,756,7,24,0,0,739,740,
+ 7,10,0,0,740,741,3,213,106,0,741,742,3,213,106,0,742,743,3,213,106,
+ 0,743,744,3,213,106,0,744,756,1,0,0,0,745,746,7,10,0,0,746,747,3,
+ 213,106,0,747,748,3,213,106,0,748,749,3,213,106,0,749,750,3,213,
+ 106,0,750,751,3,213,106,0,751,752,3,213,106,0,752,753,3,213,106,
+ 0,753,754,3,213,106,0,754,756,1,0,0,0,755,738,1,0,0,0,755,739,1,
+ 0,0,0,755,745,1,0,0,0,756,204,1,0,0,0,757,758,5,48,0,0,758,759,5,
+ 120,0,0,759,761,1,0,0,0,760,762,3,213,106,0,761,760,1,0,0,0,762,
+ 763,1,0,0,0,763,761,1,0,0,0,763,764,1,0,0,0,764,206,1,0,0,0,765,
+ 774,3,223,111,0,766,770,3,217,108,0,767,769,3,215,107,0,768,767,
+ 1,0,0,0,769,772,1,0,0,0,770,768,1,0,0,0,770,771,1,0,0,0,771,774,
+ 1,0,0,0,772,770,1,0,0,0,773,765,1,0,0,0,773,766,1,0,0,0,774,208,
+ 1,0,0,0,775,777,3,223,111,0,776,778,3,221,110,0,777,776,1,0,0,0,
+ 778,779,1,0,0,0,779,777,1,0,0,0,779,780,1,0,0,0,780,210,1,0,0,0,
+ 781,783,7,25,0,0,782,781,1,0,0,0,783,212,1,0,0,0,784,787,3,215,107,
+ 0,785,787,3,211,105,0,786,784,1,0,0,0,786,785,1,0,0,0,787,214,1,
+ 0,0,0,788,791,3,223,111,0,789,791,3,217,108,0,790,788,1,0,0,0,790,
+ 789,1,0,0,0,791,216,1,0,0,0,792,795,3,219,109,0,793,795,2,56,57,
+ 0,794,792,1,0,0,0,794,793,1,0,0,0,795,218,1,0,0,0,796,797,2,49,55,
+ 0,797,220,1,0,0,0,798,801,3,223,111,0,799,801,3,219,109,0,800,798,
+ 1,0,0,0,800,799,1,0,0,0,801,222,1,0,0,0,802,803,5,48,0,0,803,224,
+ 1,0,0,0,804,806,3,215,107,0,805,804,1,0,0,0,806,807,1,0,0,0,807,
+ 805,1,0,0,0,807,808,1,0,0,0,808,827,1,0,0,0,809,811,3,215,107,0,
+ 810,809,1,0,0,0,811,812,1,0,0,0,812,810,1,0,0,0,812,813,1,0,0,0,
+ 813,814,1,0,0,0,814,816,5,46,0,0,815,817,3,215,107,0,816,815,1,0,
+ 0,0,817,818,1,0,0,0,818,816,1,0,0,0,818,819,1,0,0,0,819,827,1,0,
+ 0,0,820,822,5,46,0,0,821,823,3,215,107,0,822,821,1,0,0,0,823,824,
+ 1,0,0,0,824,822,1,0,0,0,824,825,1,0,0,0,825,827,1,0,0,0,826,805,
+ 1,0,0,0,826,810,1,0,0,0,826,820,1,0,0,0,827,828,1,0,0,0,828,830,
+ 7,0,0,0,829,831,5,45,0,0,830,829,1,0,0,0,830,831,1,0,0,0,831,833,
+ 1,0,0,0,832,834,3,215,107,0,833,832,1,0,0,0,834,835,1,0,0,0,835,
+ 833,1,0,0,0,835,836,1,0,0,0,836,226,1,0,0,0,837,839,3,215,107,0,
+ 838,837,1,0,0,0,839,842,1,0,0,0,840,838,1,0,0,0,840,841,1,0,0,0,
+ 841,843,1,0,0,0,842,840,1,0,0,0,843,845,5,46,0,0,844,846,3,215,107,
+ 0,845,844,1,0,0,0,846,847,1,0,0,0,847,845,1,0,0,0,847,848,1,0,0,
+ 0,848,228,1,0,0,0,849,850,7,9,0,0,850,851,7,5,0,0,851,852,7,3,0,
+ 0,852,853,7,11,0,0,853,854,7,0,0,0,854,855,7,7,0,0,855,230,1,0,0,
+ 0,856,857,7,0,0,0,857,858,7,1,0,0,858,859,7,11,0,0,859,860,7,7,0,
+ 0,860,861,7,4,0,0,861,862,7,13,0,0,862,863,7,11,0,0,863,232,1,0,
+ 0,0,864,868,3,255,127,0,865,867,3,257,128,0,866,865,1,0,0,0,867,
+ 870,1,0,0,0,868,866,1,0,0,0,868,869,1,0,0,0,869,234,1,0,0,0,870,
+ 868,1,0,0,0,871,872,7,13,0,0,872,873,7,8,0,0,873,874,7,6,0,0,874,
+ 875,7,17,0,0,875,876,7,11,0,0,876,877,7,7,0,0,877,878,7,4,0,0,878,
+ 879,7,5,0,0,879,880,7,6,0,0,880,881,7,11,0,0,881,236,1,0,0,0,882,
+ 883,7,16,0,0,883,884,7,8,0,0,884,238,1,0,0,0,885,886,7,9,0,0,886,
+ 887,7,8,0,0,887,888,7,7,0,0,888,240,1,0,0,0,889,890,7,7,0,0,890,
+ 891,7,0,0,0,891,892,7,26,0,0,892,893,7,10,0,0,893,894,7,5,0,0,894,
+ 895,7,7,0,0,895,896,7,0,0,0,896,242,1,0,0,0,897,898,7,10,0,0,898,
+ 899,7,6,0,0,899,900,7,5,0,0,900,901,7,26,0,0,901,902,7,10,0,0,902,
+ 903,7,0,0,0,903,244,1,0,0,0,904,905,7,12,0,0,905,906,7,4,0,0,906,
+ 907,7,6,0,0,907,908,7,16,0,0,908,909,7,4,0,0,909,910,7,11,0,0,910,
+ 911,7,8,0,0,911,912,7,7,0,0,912,913,7,20,0,0,913,246,1,0,0,0,914,
+ 915,7,17,0,0,915,916,7,13,0,0,916,917,7,4,0,0,917,918,7,3,0,0,918,
+ 919,7,4,0,0,919,920,7,7,0,0,920,248,1,0,0,0,921,922,7,8,0,0,922,
+ 923,7,9,0,0,923,250,1,0,0,0,924,925,7,4,0,0,925,926,7,16,0,0,926,
+ 927,7,16,0,0,927,252,1,0,0,0,928,929,7,16,0,0,929,930,7,7,0,0,930,
+ 931,7,8,0,0,931,932,7,2,0,0,932,254,1,0,0,0,933,936,3,305,152,0,
+ 934,936,3,293,146,0,935,933,1,0,0,0,935,934,1,0,0,0,936,256,1,0,
+ 0,0,937,940,3,273,136,0,938,940,3,289,144,0,939,937,1,0,0,0,939,
+ 938,1,0,0,0,940,258,1,0,0,0,941,945,5,96,0,0,942,944,3,269,134,0,
+ 943,942,1,0,0,0,944,947,1,0,0,0,945,943,1,0,0,0,945,946,1,0,0,0,
+ 946,948,1,0,0,0,947,945,1,0,0,0,948,950,5,96,0,0,949,941,1,0,0,0,
+ 950,951,1,0,0,0,951,949,1,0,0,0,951,952,1,0,0,0,952,260,1,0,0,0,
+ 953,955,3,263,131,0,954,953,1,0,0,0,955,956,1,0,0,0,956,954,1,0,
+ 0,0,956,957,1,0,0,0,957,262,1,0,0,0,958,971,3,291,145,0,959,971,
+ 3,295,147,0,960,971,3,299,149,0,961,971,3,301,150,0,962,971,3,267,
+ 133,0,963,971,3,287,143,0,964,971,3,285,142,0,965,971,3,283,141,
+ 0,966,971,3,271,135,0,967,971,3,303,151,0,968,971,7,27,0,0,969,971,
+ 3,265,132,0,970,958,1,0,0,0,970,959,1,0,0,0,970,960,1,0,0,0,970,
+ 961,1,0,0,0,970,962,1,0,0,0,970,963,1,0,0,0,970,964,1,0,0,0,970,
+ 965,1,0,0,0,970,966,1,0,0,0,970,967,1,0,0,0,970,968,1,0,0,0,970,
+ 969,1,0,0,0,971,264,1,0,0,0,972,973,5,47,0,0,973,974,5,42,0,0,974,
+ 980,1,0,0,0,975,979,3,275,137,0,976,977,5,42,0,0,977,979,3,281,140,
+ 0,978,975,1,0,0,0,978,976,1,0,0,0,979,982,1,0,0,0,980,978,1,0,0,
+ 0,980,981,1,0,0,0,981,983,1,0,0,0,982,980,1,0,0,0,983,984,5,42,0,
+ 0,984,1002,5,47,0,0,985,986,5,47,0,0,986,987,5,47,0,0,987,991,1,
+ 0,0,0,988,990,3,279,139,0,989,988,1,0,0,0,990,993,1,0,0,0,991,989,
+ 1,0,0,0,991,992,1,0,0,0,992,995,1,0,0,0,993,991,1,0,0,0,994,996,
+ 3,287,143,0,995,994,1,0,0,0,995,996,1,0,0,0,996,999,1,0,0,0,997,
+ 1000,3,299,149,0,998,1000,5,0,0,1,999,997,1,0,0,0,999,998,1,0,0,
+ 0,1000,1002,1,0,0,0,1001,972,1,0,0,0,1001,985,1,0,0,0,1002,266,1,
+ 0,0,0,1003,1004,7,28,0,0,1004,268,1,0,0,0,1005,1006,7,29,0,0,1006,
+ 270,1,0,0,0,1007,1008,7,30,0,0,1008,272,1,0,0,0,1009,1010,7,31,0,
+ 0,1010,274,1,0,0,0,1011,1012,7,32,0,0,1012,276,1,0,0,0,1013,1014,
+ 7,33,0,0,1014,278,1,0,0,0,1015,1016,7,34,0,0,1016,280,1,0,0,0,1017,
+ 1018,7,35,0,0,1018,282,1,0,0,0,1019,1020,7,36,0,0,1020,284,1,0,0,
+ 0,1021,1022,7,37,0,0,1022,286,1,0,0,0,1023,1024,7,38,0,0,1024,288,
+ 1,0,0,0,1025,1026,7,39,0,0,1026,290,1,0,0,0,1027,1028,7,40,0,0,1028,
+ 292,1,0,0,0,1029,1030,7,41,0,0,1030,294,1,0,0,0,1031,1032,7,42,0,
+ 0,1032,296,1,0,0,0,1033,1034,7,43,0,0,1034,298,1,0,0,0,1035,1036,
+ 7,44,0,0,1036,300,1,0,0,0,1037,1038,7,45,0,0,1038,302,1,0,0,0,1039,
+ 1040,7,46,0,0,1040,304,1,0,0,0,1041,1042,7,47,0,0,1042,306,1,0,0,
+ 0,38,0,720,722,729,731,735,755,763,770,773,779,782,786,790,794,800,
+ 807,812,818,824,826,830,835,840,847,868,935,939,945,951,956,970,
+ 978,980,991,995,999,1001,0
+ ]
+
+class LcypherLexer(Lexer):
+
+ atn = ATNDeserializer().deserialize(serializedATN())
+
+ decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+ T__0 = 1
+ T__1 = 2
+ T__2 = 3
+ T__3 = 4
+ T__4 = 5
+ T__5 = 6
+ T__6 = 7
+ T__7 = 8
+ T__8 = 9
+ T__9 = 10
+ T__10 = 11
+ T__11 = 12
+ T__12 = 13
+ T__13 = 14
+ T__14 = 15
+ T__15 = 16
+ T__16 = 17
+ T__17 = 18
+ T__18 = 19
+ T__19 = 20
+ T__20 = 21
+ T__21 = 22
+ T__22 = 23
+ T__23 = 24
+ T__24 = 25
+ T__25 = 26
+ T__26 = 27
+ T__27 = 28
+ T__28 = 29
+ T__29 = 30
+ T__30 = 31
+ T__31 = 32
+ T__32 = 33
+ T__33 = 34
+ T__34 = 35
+ T__35 = 36
+ T__36 = 37
+ T__37 = 38
+ T__38 = 39
+ T__39 = 40
+ T__40 = 41
+ T__41 = 42
+ T__42 = 43
+ T__43 = 44
+ T__44 = 45
+ EXPLAIN = 46
+ PROFILE = 47
+ UNION = 48
+ ALL = 49
+ OPTIONAL_ = 50
+ MATCH = 51
+ UNWIND = 52
+ AS = 53
+ MERGE = 54
+ ON = 55
+ CREATE = 56
+ SET = 57
+ DETACH = 58
+ DELETE_ = 59
+ REMOVE = 60
+ CALL = 61
+ YIELD = 62
+ WITH = 63
+ DISTINCT = 64
+ RETURN = 65
+ ORDER = 66
+ BY = 67
+ L_SKIP = 68
+ LIMIT = 69
+ ASCENDING = 70
+ ASC = 71
+ DESCENDING = 72
+ DESC = 73
+ USING = 74
+ JOIN = 75
+ START = 76
+ WHERE = 77
+ OR = 78
+ XOR = 79
+ AND = 80
+ NOT = 81
+ IN = 82
+ STARTS = 83
+ ENDS = 84
+ CONTAINS = 85
+ REGEXP = 86
+ IS = 87
+ NULL_ = 88
+ COUNT = 89
+ ANY = 90
+ NONE = 91
+ SINGLE = 92
+ TRUE_ = 93
+ FALSE_ = 94
+ EXISTS = 95
+ CASE = 96
+ ELSE = 97
+ END = 98
+ WHEN = 99
+ THEN = 100
+ StringLiteral = 101
+ EscapedChar = 102
+ HexInteger = 103
+ DecimalInteger = 104
+ OctalInteger = 105
+ HexLetter = 106
+ HexDigit = 107
+ Digit = 108
+ NonZeroDigit = 109
+ NonZeroOctDigit = 110
+ OctDigit = 111
+ ZeroDigit = 112
+ ExponentDecimalReal = 113
+ RegularDecimalReal = 114
+ FILTER = 115
+ EXTRACT = 116
+ UnescapedSymbolicName = 117
+ CONSTRAINT = 118
+ DO = 119
+ FOR = 120
+ REQUIRE = 121
+ UNIQUE = 122
+ MANDATORY = 123
+ SCALAR = 124
+ OF = 125
+ ADD = 126
+ DROP = 127
+ IdentifierStart = 128
+ IdentifierPart = 129
+ EscapedSymbolicName = 130
+ SP = 131
+ WHITESPACE = 132
+ Comment = 133
+
+ channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
+
+ modeNames = [ "DEFAULT_MODE" ]
+
+ literalNames = [ "",
+ "';'", "','", "'='", "'+='", "'*'", "'('", "')'", "'['", "']'",
+ "':'", "'|'", "'..'", "'+'", "'-'", "'/'", "'%'", "'^'", "'<>'",
+ "'<'", "'>'", "'<='", "'>='", "'.'", "'{'", "'}'", "'$'", "'\\u27E8'",
+ "'\\u3008'", "'\\uFE64'", "'\\uFF1C'", "'\\u27E9'", "'\\u3009'",
+ "'\\uFE65'", "'\\uFF1E'", "'\\u00AD'", "'\\u2010'", "'\\u2011'",
+ "'\\u2012'", "'\\u2013'", "'\\u2014'", "'\\u2015'", "'\\u2212'",
+ "'\\uFE58'", "'\\uFE63'", "'\\uFF0D'", "'0'" ]
+
+ symbolicNames = [ "",
+ "EXPLAIN", "PROFILE", "UNION", "ALL", "OPTIONAL_", "MATCH",
+ "UNWIND", "AS", "MERGE", "ON", "CREATE", "SET", "DETACH", "DELETE_",
+ "REMOVE", "CALL", "YIELD", "WITH", "DISTINCT", "RETURN", "ORDER",
+ "BY", "L_SKIP", "LIMIT", "ASCENDING", "ASC", "DESCENDING", "DESC",
+ "USING", "JOIN", "START", "WHERE", "OR", "XOR", "AND", "NOT",
+ "IN", "STARTS", "ENDS", "CONTAINS", "REGEXP", "IS", "NULL_",
+ "COUNT", "ANY", "NONE", "SINGLE", "TRUE_", "FALSE_", "EXISTS",
+ "CASE", "ELSE", "END", "WHEN", "THEN", "StringLiteral", "EscapedChar",
+ "HexInteger", "DecimalInteger", "OctalInteger", "HexLetter",
+ "HexDigit", "Digit", "NonZeroDigit", "NonZeroOctDigit", "OctDigit",
+ "ZeroDigit", "ExponentDecimalReal", "RegularDecimalReal", "FILTER",
+ "EXTRACT", "UnescapedSymbolicName", "CONSTRAINT", "DO", "FOR",
+ "REQUIRE", "UNIQUE", "MANDATORY", "SCALAR", "OF", "ADD", "DROP",
+ "IdentifierStart", "IdentifierPart", "EscapedSymbolicName",
+ "SP", "WHITESPACE", "Comment" ]
+
+ ruleNames = [ "T__0", "T__1", "T__2", "T__3", "T__4", "T__5", "T__6",
+ "T__7", "T__8", "T__9", "T__10", "T__11", "T__12", "T__13",
+ "T__14", "T__15", "T__16", "T__17", "T__18", "T__19",
+ "T__20", "T__21", "T__22", "T__23", "T__24", "T__25",
+ "T__26", "T__27", "T__28", "T__29", "T__30", "T__31",
+ "T__32", "T__33", "T__34", "T__35", "T__36", "T__37",
+ "T__38", "T__39", "T__40", "T__41", "T__42", "T__43",
+ "T__44", "EXPLAIN", "PROFILE", "UNION", "ALL", "OPTIONAL_",
+ "MATCH", "UNWIND", "AS", "MERGE", "ON", "CREATE", "SET",
+ "DETACH", "DELETE_", "REMOVE", "CALL", "YIELD", "WITH",
+ "DISTINCT", "RETURN", "ORDER", "BY", "L_SKIP", "LIMIT",
+ "ASCENDING", "ASC", "DESCENDING", "DESC", "USING", "JOIN",
+ "START", "WHERE", "OR", "XOR", "AND", "NOT", "IN", "STARTS",
+ "ENDS", "CONTAINS", "REGEXP", "IS", "NULL_", "COUNT",
+ "ANY", "NONE", "SINGLE", "TRUE_", "FALSE_", "EXISTS",
+ "CASE", "ELSE", "END", "WHEN", "THEN", "StringLiteral",
+ "EscapedChar", "HexInteger", "DecimalInteger", "OctalInteger",
+ "HexLetter", "HexDigit", "Digit", "NonZeroDigit", "NonZeroOctDigit",
+ "OctDigit", "ZeroDigit", "ExponentDecimalReal", "RegularDecimalReal",
+ "FILTER", "EXTRACT", "UnescapedSymbolicName", "CONSTRAINT",
+ "DO", "FOR", "REQUIRE", "UNIQUE", "MANDATORY", "SCALAR",
+ "OF", "ADD", "DROP", "IdentifierStart", "IdentifierPart",
+ "EscapedSymbolicName", "SP", "WHITESPACE", "Comment",
+ "FF", "EscapedSymbolicName_0", "RS", "ID_Continue", "Comment_1",
+ "StringLiteral_1", "Comment_3", "Comment_2", "GS", "FS",
+ "CR", "Sc", "SPACE", "Pc", "TAB", "StringLiteral_0", "LF",
+ "VT", "US", "ID_Start" ]
+
+ grammarFileName = "Lcypher.g4"
+
+ def __init__(self, input=None, output:TextIO = sys.stdout):
+ super().__init__(input, output)
+ self.checkVersion("4.13.2")
+ self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
+ self._actions = None
+ self._predicates = None
+
+
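The lexer above is emitted verbatim by ANTLR 4.13.2 from `Lcypher.g4`; the `serializedATN()` payload encodes the lexer's state machine and should not be edited by hand. The sketch below shows how the grammar-validity side of the evaluator can drive this generated lexer: it tokenizes a candidate query and collects lexical errors instead of letting them print to stderr. The `CollectingErrorListener` class and the sample query are illustrative assumptions, not part of this PR; the imports are the standard `antlr4-python3-runtime` API.

```python
# Minimal sketch (not part of the generated file): lexical validity check.
from antlr4 import CommonTokenStream, InputStream
from antlr4.error.ErrorListener import ErrorListener

from LcypherLexer import LcypherLexer  # the generated module above


class CollectingErrorListener(ErrorListener):
    """Collects lexical errors instead of printing them to stderr."""

    def __init__(self):
        self.errors = []

    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        self.errors.append(f"line {line}:{column} {msg}")


def tokenize(query: str):
    """Return (tokens, errors) for a candidate Cypher query."""
    lexer = LcypherLexer(InputStream(query))
    listener = CollectingErrorListener()
    lexer.removeErrorListeners()  # drop the default console listener
    lexer.addErrorListener(listener)
    stream = CommonTokenStream(lexer)
    stream.fill()  # force the lexer to scan the whole input
    return stream.tokens, listener.errors


tokens, errors = tokenize("MATCH (n:Person) RETURN n LIMIT 10")
print(not errors)  # True when the query is lexically valid
```

For a full syntax check, the same token stream would be handed to the generated `LcypherParser` (which the listener file below imports), with an analogous error listener attached to the parser.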
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.tokens b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.tokens
new file mode 100644
index 00000000..856ae310
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherLexer.tokens
@@ -0,0 +1,179 @@
+T__0=1
+T__1=2
+T__2=3
+T__3=4
+T__4=5
+T__5=6
+T__6=7
+T__7=8
+T__8=9
+T__9=10
+T__10=11
+T__11=12
+T__12=13
+T__13=14
+T__14=15
+T__15=16
+T__16=17
+T__17=18
+T__18=19
+T__19=20
+T__20=21
+T__21=22
+T__22=23
+T__23=24
+T__24=25
+T__25=26
+T__26=27
+T__27=28
+T__28=29
+T__29=30
+T__30=31
+T__31=32
+T__32=33
+T__33=34
+T__34=35
+T__35=36
+T__36=37
+T__37=38
+T__38=39
+T__39=40
+T__40=41
+T__41=42
+T__42=43
+T__43=44
+T__44=45
+EXPLAIN=46
+PROFILE=47
+UNION=48
+ALL=49
+OPTIONAL_=50
+MATCH=51
+UNWIND=52
+AS=53
+MERGE=54
+ON=55
+CREATE=56
+SET=57
+DETACH=58
+DELETE_=59
+REMOVE=60
+CALL=61
+YIELD=62
+WITH=63
+DISTINCT=64
+RETURN=65
+ORDER=66
+BY=67
+L_SKIP=68
+LIMIT=69
+ASCENDING=70
+ASC=71
+DESCENDING=72
+DESC=73
+USING=74
+JOIN=75
+START=76
+WHERE=77
+OR=78
+XOR=79
+AND=80
+NOT=81
+IN=82
+STARTS=83
+ENDS=84
+CONTAINS=85
+REGEXP=86
+IS=87
+NULL_=88
+COUNT=89
+ANY=90
+NONE=91
+SINGLE=92
+TRUE_=93
+FALSE_=94
+EXISTS=95
+CASE=96
+ELSE=97
+END=98
+WHEN=99
+THEN=100
+StringLiteral=101
+EscapedChar=102
+HexInteger=103
+DecimalInteger=104
+OctalInteger=105
+HexLetter=106
+HexDigit=107
+Digit=108
+NonZeroDigit=109
+NonZeroOctDigit=110
+OctDigit=111
+ZeroDigit=112
+ExponentDecimalReal=113
+RegularDecimalReal=114
+FILTER=115
+EXTRACT=116
+UnescapedSymbolicName=117
+CONSTRAINT=118
+DO=119
+FOR=120
+REQUIRE=121
+UNIQUE=122
+MANDATORY=123
+SCALAR=124
+OF=125
+ADD=126
+DROP=127
+IdentifierStart=128
+IdentifierPart=129
+EscapedSymbolicName=130
+SP=131
+WHITESPACE=132
+Comment=133
+';'=1
+','=2
+'='=3
+'+='=4
+'*'=5
+'('=6
+')'=7
+'['=8
+']'=9
+':'=10
+'|'=11
+'..'=12
+'+'=13
+'-'=14
+'/'=15
+'%'=16
+'^'=17
+'<>'=18
+'<'=19
+'>'=20
+'<='=21
+'>='=22
+'.'=23
+'{'=24
+'}'=25
+'$'=26
+'⟨'=27
+'〈'=28
+'﹤'=29
+'＜'=30
+'⟩'=31
+'〉'=32
+'﹥'=33
+'＞'=34
+'­'=35
+'‐'=36
+'‑'=37
+'‒'=38
+'–'=39
+'—'=40
+'―'=41
+'−'=42
+'﹘'=43
+'﹣'=44
+'－'=45
+'0'=112
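`LcypherLexer.tokens` is a plain `name=type` mapping that ANTLR tooling uses to keep token types consistent across the generated files; types 27–45 cover the full-width and typographic bracket/dash variants listed in `literalNames` above, including the invisible soft hyphen U+00AD at type 35. Should the evaluator ever need this mapping at runtime, it is trivial to parse; `load_token_map` below is a hypothetical helper, not part of this PR.

```python
# Minimal sketch: read a .tokens file into a {name: type} dict.
def load_token_map(path: str) -> dict:
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Split on the *last* '=' so literal entries like '<='=21 survive.
            name, _, value = line.rstrip("\n").rpartition("=")
            if name:
                mapping[name] = int(value)
    return mapping


token_map = load_token_map("LcypherLexer.tokens")
assert token_map["MATCH"] == 51 and token_map["RETURN"] == 65
```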
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherListener.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherListener.py
new file mode 100644
index 00000000..b4dba09d
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherListener.py
@@ -0,0 +1,912 @@
+# Generated from Lcypher.g4 by ANTLR 4.13.2
+from antlr4 import *
+if "." in __name__:
+ from .LcypherParser import LcypherParser
+else:
+ from LcypherParser import LcypherParser
+
+# This class defines a complete listener for a parse tree produced by LcypherParser.
+class LcypherListener(ParseTreeListener):
+
+ # Enter a parse tree produced by LcypherParser#oC_Cypher.
+ def enterOC_Cypher(self, ctx:LcypherParser.OC_CypherContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Cypher.
+ def exitOC_Cypher(self, ctx:LcypherParser.OC_CypherContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Statement.
+ def enterOC_Statement(self, ctx:LcypherParser.OC_StatementContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Statement.
+ def exitOC_Statement(self, ctx:LcypherParser.OC_StatementContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Query.
+ def enterOC_Query(self, ctx:LcypherParser.OC_QueryContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Query.
+ def exitOC_Query(self, ctx:LcypherParser.OC_QueryContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RegularQuery.
+ def enterOC_RegularQuery(self, ctx:LcypherParser.OC_RegularQueryContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RegularQuery.
+ def exitOC_RegularQuery(self, ctx:LcypherParser.OC_RegularQueryContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Union.
+ def enterOC_Union(self, ctx:LcypherParser.OC_UnionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Union.
+ def exitOC_Union(self, ctx:LcypherParser.OC_UnionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SingleQuery.
+ def enterOC_SingleQuery(self, ctx:LcypherParser.OC_SingleQueryContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SingleQuery.
+ def exitOC_SingleQuery(self, ctx:LcypherParser.OC_SingleQueryContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SinglePartQuery.
+ def enterOC_SinglePartQuery(self, ctx:LcypherParser.OC_SinglePartQueryContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SinglePartQuery.
+ def exitOC_SinglePartQuery(self, ctx:LcypherParser.OC_SinglePartQueryContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_MultiPartQuery.
+ def enterOC_MultiPartQuery(self, ctx:LcypherParser.OC_MultiPartQueryContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_MultiPartQuery.
+ def exitOC_MultiPartQuery(self, ctx:LcypherParser.OC_MultiPartQueryContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_UpdatingClause.
+ def enterOC_UpdatingClause(self, ctx:LcypherParser.OC_UpdatingClauseContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_UpdatingClause.
+ def exitOC_UpdatingClause(self, ctx:LcypherParser.OC_UpdatingClauseContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ReadingClause.
+ def enterOC_ReadingClause(self, ctx:LcypherParser.OC_ReadingClauseContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ReadingClause.
+ def exitOC_ReadingClause(self, ctx:LcypherParser.OC_ReadingClauseContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Match.
+ def enterOC_Match(self, ctx:LcypherParser.OC_MatchContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Match.
+ def exitOC_Match(self, ctx:LcypherParser.OC_MatchContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Unwind.
+ def enterOC_Unwind(self, ctx:LcypherParser.OC_UnwindContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Unwind.
+ def exitOC_Unwind(self, ctx:LcypherParser.OC_UnwindContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Merge.
+ def enterOC_Merge(self, ctx:LcypherParser.OC_MergeContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Merge.
+ def exitOC_Merge(self, ctx:LcypherParser.OC_MergeContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_MergeAction.
+ def enterOC_MergeAction(self, ctx:LcypherParser.OC_MergeActionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_MergeAction.
+ def exitOC_MergeAction(self, ctx:LcypherParser.OC_MergeActionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Create.
+ def enterOC_Create(self, ctx:LcypherParser.OC_CreateContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Create.
+ def exitOC_Create(self, ctx:LcypherParser.OC_CreateContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Set.
+ def enterOC_Set(self, ctx:LcypherParser.OC_SetContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Set.
+ def exitOC_Set(self, ctx:LcypherParser.OC_SetContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SetItem.
+ def enterOC_SetItem(self, ctx:LcypherParser.OC_SetItemContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SetItem.
+ def exitOC_SetItem(self, ctx:LcypherParser.OC_SetItemContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Delete.
+ def enterOC_Delete(self, ctx:LcypherParser.OC_DeleteContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Delete.
+ def exitOC_Delete(self, ctx:LcypherParser.OC_DeleteContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Remove.
+ def enterOC_Remove(self, ctx:LcypherParser.OC_RemoveContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Remove.
+ def exitOC_Remove(self, ctx:LcypherParser.OC_RemoveContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RemoveItem.
+ def enterOC_RemoveItem(self, ctx:LcypherParser.OC_RemoveItemContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RemoveItem.
+ def exitOC_RemoveItem(self, ctx:LcypherParser.OC_RemoveItemContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_InQueryCall.
+ def enterOC_InQueryCall(self, ctx:LcypherParser.OC_InQueryCallContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_InQueryCall.
+ def exitOC_InQueryCall(self, ctx:LcypherParser.OC_InQueryCallContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_StandaloneCall.
+ def enterOC_StandaloneCall(self, ctx:LcypherParser.OC_StandaloneCallContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_StandaloneCall.
+ def exitOC_StandaloneCall(self, ctx:LcypherParser.OC_StandaloneCallContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_YieldItems.
+ def enterOC_YieldItems(self, ctx:LcypherParser.OC_YieldItemsContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_YieldItems.
+ def exitOC_YieldItems(self, ctx:LcypherParser.OC_YieldItemsContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_YieldItem.
+ def enterOC_YieldItem(self, ctx:LcypherParser.OC_YieldItemContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_YieldItem.
+ def exitOC_YieldItem(self, ctx:LcypherParser.OC_YieldItemContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_With.
+ def enterOC_With(self, ctx:LcypherParser.OC_WithContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_With.
+ def exitOC_With(self, ctx:LcypherParser.OC_WithContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Return.
+ def enterOC_Return(self, ctx:LcypherParser.OC_ReturnContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Return.
+ def exitOC_Return(self, ctx:LcypherParser.OC_ReturnContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ReturnBody.
+ def enterOC_ReturnBody(self, ctx:LcypherParser.OC_ReturnBodyContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ReturnBody.
+ def exitOC_ReturnBody(self, ctx:LcypherParser.OC_ReturnBodyContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ReturnItems.
+ def enterOC_ReturnItems(self, ctx:LcypherParser.OC_ReturnItemsContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ReturnItems.
+ def exitOC_ReturnItems(self, ctx:LcypherParser.OC_ReturnItemsContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ReturnItem.
+ def enterOC_ReturnItem(self, ctx:LcypherParser.OC_ReturnItemContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ReturnItem.
+ def exitOC_ReturnItem(self, ctx:LcypherParser.OC_ReturnItemContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Order.
+ def enterOC_Order(self, ctx:LcypherParser.OC_OrderContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Order.
+ def exitOC_Order(self, ctx:LcypherParser.OC_OrderContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Skip.
+ def enterOC_Skip(self, ctx:LcypherParser.OC_SkipContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Skip.
+ def exitOC_Skip(self, ctx:LcypherParser.OC_SkipContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Limit.
+ def enterOC_Limit(self, ctx:LcypherParser.OC_LimitContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Limit.
+ def exitOC_Limit(self, ctx:LcypherParser.OC_LimitContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SortItem.
+ def enterOC_SortItem(self, ctx:LcypherParser.OC_SortItemContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SortItem.
+ def exitOC_SortItem(self, ctx:LcypherParser.OC_SortItemContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Hint.
+ def enterOC_Hint(self, ctx:LcypherParser.OC_HintContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Hint.
+ def exitOC_Hint(self, ctx:LcypherParser.OC_HintContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Where.
+ def enterOC_Where(self, ctx:LcypherParser.OC_WhereContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Where.
+ def exitOC_Where(self, ctx:LcypherParser.OC_WhereContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Pattern.
+ def enterOC_Pattern(self, ctx:LcypherParser.OC_PatternContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Pattern.
+ def exitOC_Pattern(self, ctx:LcypherParser.OC_PatternContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PatternPart.
+ def enterOC_PatternPart(self, ctx:LcypherParser.OC_PatternPartContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PatternPart.
+ def exitOC_PatternPart(self, ctx:LcypherParser.OC_PatternPartContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_AnonymousPatternPart.
+ def enterOC_AnonymousPatternPart(self, ctx:LcypherParser.OC_AnonymousPatternPartContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_AnonymousPatternPart.
+ def exitOC_AnonymousPatternPart(self, ctx:LcypherParser.OC_AnonymousPatternPartContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PatternElement.
+ def enterOC_PatternElement(self, ctx:LcypherParser.OC_PatternElementContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PatternElement.
+ def exitOC_PatternElement(self, ctx:LcypherParser.OC_PatternElementContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NodePattern.
+ def enterOC_NodePattern(self, ctx:LcypherParser.OC_NodePatternContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NodePattern.
+ def exitOC_NodePattern(self, ctx:LcypherParser.OC_NodePatternContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PatternElementChain.
+ def enterOC_PatternElementChain(self, ctx:LcypherParser.OC_PatternElementChainContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PatternElementChain.
+ def exitOC_PatternElementChain(self, ctx:LcypherParser.OC_PatternElementChainContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RelationshipPattern.
+ def enterOC_RelationshipPattern(self, ctx:LcypherParser.OC_RelationshipPatternContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RelationshipPattern.
+ def exitOC_RelationshipPattern(self, ctx:LcypherParser.OC_RelationshipPatternContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RelationshipDetail.
+ def enterOC_RelationshipDetail(self, ctx:LcypherParser.OC_RelationshipDetailContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RelationshipDetail.
+ def exitOC_RelationshipDetail(self, ctx:LcypherParser.OC_RelationshipDetailContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Properties.
+ def enterOC_Properties(self, ctx:LcypherParser.OC_PropertiesContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Properties.
+ def exitOC_Properties(self, ctx:LcypherParser.OC_PropertiesContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RelationshipTypes.
+ def enterOC_RelationshipTypes(self, ctx:LcypherParser.OC_RelationshipTypesContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RelationshipTypes.
+ def exitOC_RelationshipTypes(self, ctx:LcypherParser.OC_RelationshipTypesContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NodeLabels.
+ def enterOC_NodeLabels(self, ctx:LcypherParser.OC_NodeLabelsContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NodeLabels.
+ def exitOC_NodeLabels(self, ctx:LcypherParser.OC_NodeLabelsContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NodeLabel.
+ def enterOC_NodeLabel(self, ctx:LcypherParser.OC_NodeLabelContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NodeLabel.
+ def exitOC_NodeLabel(self, ctx:LcypherParser.OC_NodeLabelContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RangeLiteral.
+ def enterOC_RangeLiteral(self, ctx:LcypherParser.OC_RangeLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RangeLiteral.
+ def exitOC_RangeLiteral(self, ctx:LcypherParser.OC_RangeLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_LabelName.
+ def enterOC_LabelName(self, ctx:LcypherParser.OC_LabelNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_LabelName.
+ def exitOC_LabelName(self, ctx:LcypherParser.OC_LabelNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RelTypeName.
+ def enterOC_RelTypeName(self, ctx:LcypherParser.OC_RelTypeNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RelTypeName.
+ def exitOC_RelTypeName(self, ctx:LcypherParser.OC_RelTypeNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Expression.
+ def enterOC_Expression(self, ctx:LcypherParser.OC_ExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Expression.
+ def exitOC_Expression(self, ctx:LcypherParser.OC_ExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_OrExpression.
+ def enterOC_OrExpression(self, ctx:LcypherParser.OC_OrExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_OrExpression.
+ def exitOC_OrExpression(self, ctx:LcypherParser.OC_OrExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_XorExpression.
+ def enterOC_XorExpression(self, ctx:LcypherParser.OC_XorExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_XorExpression.
+ def exitOC_XorExpression(self, ctx:LcypherParser.OC_XorExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_AndExpression.
+ def enterOC_AndExpression(self, ctx:LcypherParser.OC_AndExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_AndExpression.
+ def exitOC_AndExpression(self, ctx:LcypherParser.OC_AndExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NotExpression.
+ def enterOC_NotExpression(self, ctx:LcypherParser.OC_NotExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NotExpression.
+ def exitOC_NotExpression(self, ctx:LcypherParser.OC_NotExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ComparisonExpression.
+ def enterOC_ComparisonExpression(self, ctx:LcypherParser.OC_ComparisonExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ComparisonExpression.
+ def exitOC_ComparisonExpression(self, ctx:LcypherParser.OC_ComparisonExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_AddOrSubtractExpression.
+ def enterOC_AddOrSubtractExpression(self, ctx:LcypherParser.OC_AddOrSubtractExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_AddOrSubtractExpression.
+ def exitOC_AddOrSubtractExpression(self, ctx:LcypherParser.OC_AddOrSubtractExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_MultiplyDivideModuloExpression.
+ def enterOC_MultiplyDivideModuloExpression(self, ctx:LcypherParser.OC_MultiplyDivideModuloExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_MultiplyDivideModuloExpression.
+ def exitOC_MultiplyDivideModuloExpression(self, ctx:LcypherParser.OC_MultiplyDivideModuloExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PowerOfExpression.
+ def enterOC_PowerOfExpression(self, ctx:LcypherParser.OC_PowerOfExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PowerOfExpression.
+ def exitOC_PowerOfExpression(self, ctx:LcypherParser.OC_PowerOfExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_UnaryAddOrSubtractExpression.
+ def enterOC_UnaryAddOrSubtractExpression(self, ctx:LcypherParser.OC_UnaryAddOrSubtractExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_UnaryAddOrSubtractExpression.
+ def exitOC_UnaryAddOrSubtractExpression(self, ctx:LcypherParser.OC_UnaryAddOrSubtractExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_StringListNullOperatorExpression.
+ def enterOC_StringListNullOperatorExpression(self, ctx:LcypherParser.OC_StringListNullOperatorExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_StringListNullOperatorExpression.
+ def exitOC_StringListNullOperatorExpression(self, ctx:LcypherParser.OC_StringListNullOperatorExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ListOperatorExpression.
+ def enterOC_ListOperatorExpression(self, ctx:LcypherParser.OC_ListOperatorExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ListOperatorExpression.
+ def exitOC_ListOperatorExpression(self, ctx:LcypherParser.OC_ListOperatorExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_StringOperatorExpression.
+ def enterOC_StringOperatorExpression(self, ctx:LcypherParser.OC_StringOperatorExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_StringOperatorExpression.
+ def exitOC_StringOperatorExpression(self, ctx:LcypherParser.OC_StringOperatorExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NullOperatorExpression.
+ def enterOC_NullOperatorExpression(self, ctx:LcypherParser.OC_NullOperatorExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NullOperatorExpression.
+ def exitOC_NullOperatorExpression(self, ctx:LcypherParser.OC_NullOperatorExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PropertyOrLabelsExpression.
+ def enterOC_PropertyOrLabelsExpression(self, ctx:LcypherParser.OC_PropertyOrLabelsExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PropertyOrLabelsExpression.
+ def exitOC_PropertyOrLabelsExpression(self, ctx:LcypherParser.OC_PropertyOrLabelsExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Atom.
+ def enterOC_Atom(self, ctx:LcypherParser.OC_AtomContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Atom.
+ def exitOC_Atom(self, ctx:LcypherParser.OC_AtomContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Literal.
+ def enterOC_Literal(self, ctx:LcypherParser.OC_LiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Literal.
+ def exitOC_Literal(self, ctx:LcypherParser.OC_LiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_BooleanLiteral.
+ def enterOC_BooleanLiteral(self, ctx:LcypherParser.OC_BooleanLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_BooleanLiteral.
+ def exitOC_BooleanLiteral(self, ctx:LcypherParser.OC_BooleanLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ListLiteral.
+ def enterOC_ListLiteral(self, ctx:LcypherParser.OC_ListLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ListLiteral.
+ def exitOC_ListLiteral(self, ctx:LcypherParser.OC_ListLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PartialComparisonExpression.
+ def enterOC_PartialComparisonExpression(self, ctx:LcypherParser.OC_PartialComparisonExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PartialComparisonExpression.
+ def exitOC_PartialComparisonExpression(self, ctx:LcypherParser.OC_PartialComparisonExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ParenthesizedExpression.
+ def enterOC_ParenthesizedExpression(self, ctx:LcypherParser.OC_ParenthesizedExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ParenthesizedExpression.
+ def exitOC_ParenthesizedExpression(self, ctx:LcypherParser.OC_ParenthesizedExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RelationshipsPattern.
+ def enterOC_RelationshipsPattern(self, ctx:LcypherParser.OC_RelationshipsPatternContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RelationshipsPattern.
+ def exitOC_RelationshipsPattern(self, ctx:LcypherParser.OC_RelationshipsPatternContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_FilterExpression.
+ def enterOC_FilterExpression(self, ctx:LcypherParser.OC_FilterExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_FilterExpression.
+ def exitOC_FilterExpression(self, ctx:LcypherParser.OC_FilterExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_IdInColl.
+ def enterOC_IdInColl(self, ctx:LcypherParser.OC_IdInCollContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_IdInColl.
+ def exitOC_IdInColl(self, ctx:LcypherParser.OC_IdInCollContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_FunctionInvocation.
+ def enterOC_FunctionInvocation(self, ctx:LcypherParser.OC_FunctionInvocationContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_FunctionInvocation.
+ def exitOC_FunctionInvocation(self, ctx:LcypherParser.OC_FunctionInvocationContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_FunctionName.
+ def enterOC_FunctionName(self, ctx:LcypherParser.OC_FunctionNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_FunctionName.
+ def exitOC_FunctionName(self, ctx:LcypherParser.OC_FunctionNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ExplicitProcedureInvocation.
+ def enterOC_ExplicitProcedureInvocation(self, ctx:LcypherParser.OC_ExplicitProcedureInvocationContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ExplicitProcedureInvocation.
+ def exitOC_ExplicitProcedureInvocation(self, ctx:LcypherParser.OC_ExplicitProcedureInvocationContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ImplicitProcedureInvocation.
+ def enterOC_ImplicitProcedureInvocation(self, ctx:LcypherParser.OC_ImplicitProcedureInvocationContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ImplicitProcedureInvocation.
+ def exitOC_ImplicitProcedureInvocation(self, ctx:LcypherParser.OC_ImplicitProcedureInvocationContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ProcedureResultField.
+ def enterOC_ProcedureResultField(self, ctx:LcypherParser.OC_ProcedureResultFieldContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ProcedureResultField.
+ def exitOC_ProcedureResultField(self, ctx:LcypherParser.OC_ProcedureResultFieldContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ProcedureName.
+ def enterOC_ProcedureName(self, ctx:LcypherParser.OC_ProcedureNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ProcedureName.
+ def exitOC_ProcedureName(self, ctx:LcypherParser.OC_ProcedureNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Namespace.
+ def enterOC_Namespace(self, ctx:LcypherParser.OC_NamespaceContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Namespace.
+ def exitOC_Namespace(self, ctx:LcypherParser.OC_NamespaceContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ListComprehension.
+ def enterOC_ListComprehension(self, ctx:LcypherParser.OC_ListComprehensionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ListComprehension.
+ def exitOC_ListComprehension(self, ctx:LcypherParser.OC_ListComprehensionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PatternComprehension.
+ def enterOC_PatternComprehension(self, ctx:LcypherParser.OC_PatternComprehensionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PatternComprehension.
+ def exitOC_PatternComprehension(self, ctx:LcypherParser.OC_PatternComprehensionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PropertyLookup.
+ def enterOC_PropertyLookup(self, ctx:LcypherParser.OC_PropertyLookupContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PropertyLookup.
+ def exitOC_PropertyLookup(self, ctx:LcypherParser.OC_PropertyLookupContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_CaseExpression.
+ def enterOC_CaseExpression(self, ctx:LcypherParser.OC_CaseExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_CaseExpression.
+ def exitOC_CaseExpression(self, ctx:LcypherParser.OC_CaseExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_CaseAlternatives.
+ def enterOC_CaseAlternatives(self, ctx:LcypherParser.OC_CaseAlternativesContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_CaseAlternatives.
+ def exitOC_CaseAlternatives(self, ctx:LcypherParser.OC_CaseAlternativesContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Variable.
+ def enterOC_Variable(self, ctx:LcypherParser.OC_VariableContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Variable.
+ def exitOC_Variable(self, ctx:LcypherParser.OC_VariableContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_NumberLiteral.
+ def enterOC_NumberLiteral(self, ctx:LcypherParser.OC_NumberLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_NumberLiteral.
+ def exitOC_NumberLiteral(self, ctx:LcypherParser.OC_NumberLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_MapLiteral.
+ def enterOC_MapLiteral(self, ctx:LcypherParser.OC_MapLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_MapLiteral.
+ def exitOC_MapLiteral(self, ctx:LcypherParser.OC_MapLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Parameter.
+ def enterOC_Parameter(self, ctx:LcypherParser.OC_ParameterContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Parameter.
+ def exitOC_Parameter(self, ctx:LcypherParser.OC_ParameterContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PropertyExpression.
+ def enterOC_PropertyExpression(self, ctx:LcypherParser.OC_PropertyExpressionContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PropertyExpression.
+ def exitOC_PropertyExpression(self, ctx:LcypherParser.OC_PropertyExpressionContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_PropertyKeyName.
+ def enterOC_PropertyKeyName(self, ctx:LcypherParser.OC_PropertyKeyNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_PropertyKeyName.
+ def exitOC_PropertyKeyName(self, ctx:LcypherParser.OC_PropertyKeyNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_IntegerLiteral.
+ def enterOC_IntegerLiteral(self, ctx:LcypherParser.OC_IntegerLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_IntegerLiteral.
+ def exitOC_IntegerLiteral(self, ctx:LcypherParser.OC_IntegerLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_DoubleLiteral.
+ def enterOC_DoubleLiteral(self, ctx:LcypherParser.OC_DoubleLiteralContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_DoubleLiteral.
+ def exitOC_DoubleLiteral(self, ctx:LcypherParser.OC_DoubleLiteralContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SchemaName.
+ def enterOC_SchemaName(self, ctx:LcypherParser.OC_SchemaNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SchemaName.
+ def exitOC_SchemaName(self, ctx:LcypherParser.OC_SchemaNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_SymbolicName.
+ def enterOC_SymbolicName(self, ctx:LcypherParser.OC_SymbolicNameContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_SymbolicName.
+ def exitOC_SymbolicName(self, ctx:LcypherParser.OC_SymbolicNameContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_ReservedWord.
+ def enterOC_ReservedWord(self, ctx:LcypherParser.OC_ReservedWordContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_ReservedWord.
+ def exitOC_ReservedWord(self, ctx:LcypherParser.OC_ReservedWordContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_LeftArrowHead.
+ def enterOC_LeftArrowHead(self, ctx:LcypherParser.OC_LeftArrowHeadContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_LeftArrowHead.
+ def exitOC_LeftArrowHead(self, ctx:LcypherParser.OC_LeftArrowHeadContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_RightArrowHead.
+ def enterOC_RightArrowHead(self, ctx:LcypherParser.OC_RightArrowHeadContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_RightArrowHead.
+ def exitOC_RightArrowHead(self, ctx:LcypherParser.OC_RightArrowHeadContext):
+ pass
+
+
+ # Enter a parse tree produced by LcypherParser#oC_Dash.
+ def enterOC_Dash(self, ctx:LcypherParser.OC_DashContext):
+ pass
+
+ # Exit a parse tree produced by LcypherParser#oC_Dash.
+ def exitOC_Dash(self, ctx:LcypherParser.OC_DashContext):
+ pass
+
+
+
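+# Drop the LcypherParser name from this module's namespace; ANTLR-generated
+# listeners end with this cleanup so the parser import is not re-exported.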
+del LcypherParser
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherParser.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherParser.py
new file mode 100644
index 00000000..9f632f79
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/LcypherParser.py
@@ -0,0 +1,9797 @@
+# Generated from Lcypher.g4 by ANTLR 4.13.2
+# encoding: utf-8
+from antlr4 import *
+from io import StringIO
+import sys
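+# Compatibility shim emitted by ANTLR: on Python 3.6+ (minor version > 5),
+# TextIO is imported from typing; older interpreters fall back to the
+# legacy typing.io namespace.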
+if sys.version_info[1] > 5:
+ from typing import TextIO
+else:
+ from typing.io import TextIO
+
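+# serializedATN() returns the serialized ATN (augmented transition network)
+# state-machine data that drives this parser. It is machine-generated from
+# Lcypher.g4 by ANTLR and should not be edited by hand.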
+def serializedATN():
+ return [
+ 4,1,133,1594,2,0,7,0,2,1,7,1,2,2,7,2,2,3,7,3,2,4,7,4,2,5,7,5,2,6,
+ 7,6,2,7,7,7,2,8,7,8,2,9,7,9,2,10,7,10,2,11,7,11,2,12,7,12,2,13,7,
+ 13,2,14,7,14,2,15,7,15,2,16,7,16,2,17,7,17,2,18,7,18,2,19,7,19,2,
+ 20,7,20,2,21,7,21,2,22,7,22,2,23,7,23,2,24,7,24,2,25,7,25,2,26,7,
+ 26,2,27,7,27,2,28,7,28,2,29,7,29,2,30,7,30,2,31,7,31,2,32,7,32,2,
+ 33,7,33,2,34,7,34,2,35,7,35,2,36,7,36,2,37,7,37,2,38,7,38,2,39,7,
+ 39,2,40,7,40,2,41,7,41,2,42,7,42,2,43,7,43,2,44,7,44,2,45,7,45,2,
+ 46,7,46,2,47,7,47,2,48,7,48,2,49,7,49,2,50,7,50,2,51,7,51,2,52,7,
+ 52,2,53,7,53,2,54,7,54,2,55,7,55,2,56,7,56,2,57,7,57,2,58,7,58,2,
+ 59,7,59,2,60,7,60,2,61,7,61,2,62,7,62,2,63,7,63,2,64,7,64,2,65,7,
+ 65,2,66,7,66,2,67,7,67,2,68,7,68,2,69,7,69,2,70,7,70,2,71,7,71,2,
+ 72,7,72,2,73,7,73,2,74,7,74,2,75,7,75,2,76,7,76,2,77,7,77,2,78,7,
+ 78,2,79,7,79,2,80,7,80,2,81,7,81,2,82,7,82,2,83,7,83,2,84,7,84,2,
+ 85,7,85,2,86,7,86,2,87,7,87,2,88,7,88,2,89,7,89,2,90,7,90,2,91,7,
+ 91,2,92,7,92,2,93,7,93,2,94,7,94,2,95,7,95,2,96,7,96,2,97,7,97,2,
+ 98,7,98,2,99,7,99,1,0,3,0,202,8,0,1,0,1,0,3,0,206,8,0,1,0,3,0,209,
+ 8,0,1,0,3,0,212,8,0,1,0,1,0,1,1,1,1,1,1,3,1,219,8,1,1,1,1,1,1,1,
+ 3,1,224,8,1,1,1,3,1,227,8,1,1,2,1,2,3,2,231,8,2,1,3,1,3,3,3,235,
+ 8,3,1,3,5,3,238,8,3,10,3,12,3,241,9,3,1,4,1,4,1,4,1,4,3,4,247,8,
+ 4,1,4,1,4,1,4,3,4,252,8,4,1,4,3,4,255,8,4,1,5,1,5,3,5,259,8,5,1,
+ 6,1,6,3,6,263,8,6,5,6,265,8,6,10,6,12,6,268,9,6,1,6,1,6,1,6,3,6,
+ 273,8,6,5,6,275,8,6,10,6,12,6,278,9,6,1,6,1,6,3,6,282,8,6,1,6,5,
+ 6,285,8,6,10,6,12,6,288,9,6,1,6,3,6,291,8,6,1,6,3,6,294,8,6,3,6,
+ 296,8,6,1,7,1,7,3,7,300,8,7,5,7,302,8,7,10,7,12,7,305,9,7,1,7,1,
+ 7,3,7,309,8,7,5,7,311,8,7,10,7,12,7,314,9,7,1,7,1,7,3,7,318,8,7,
+ 4,7,320,8,7,11,7,12,7,321,1,7,1,7,1,8,1,8,1,8,1,8,1,8,3,8,331,8,
+ 8,1,9,1,9,1,9,3,9,336,8,9,1,10,1,10,3,10,340,8,10,1,10,1,10,3,10,
+ 344,8,10,1,10,1,10,3,10,348,8,10,1,10,5,10,351,8,10,10,10,12,10,
+ 354,9,10,1,10,3,10,357,8,10,1,10,3,10,360,8,10,1,11,1,11,3,11,364,
+ 8,11,1,11,1,11,1,11,1,11,1,11,1,11,1,12,1,12,3,12,374,8,12,1,12,
+ 1,12,1,12,5,12,379,8,12,10,12,12,12,382,9,12,1,13,1,13,1,13,1,13,
+ 1,13,1,13,1,13,1,13,1,13,1,13,3,13,394,8,13,1,14,1,14,3,14,398,8,
+ 14,1,14,1,14,1,15,1,15,3,15,404,8,15,1,15,1,15,3,15,408,8,15,1,15,
+ 1,15,3,15,412,8,15,1,15,5,15,415,8,15,10,15,12,15,418,9,15,1,16,
+ 1,16,3,16,422,8,16,1,16,1,16,3,16,426,8,16,1,16,1,16,1,16,1,16,3,
+ 16,432,8,16,1,16,1,16,3,16,436,8,16,1,16,1,16,1,16,1,16,3,16,442,
+ 8,16,1,16,1,16,3,16,446,8,16,1,16,1,16,1,16,1,16,3,16,452,8,16,1,
+ 16,1,16,3,16,456,8,16,1,17,1,17,3,17,460,8,17,1,17,1,17,3,17,464,
+ 8,17,1,17,1,17,3,17,468,8,17,1,17,1,17,3,17,472,8,17,1,17,5,17,475,
+ 8,17,10,17,12,17,478,9,17,1,18,1,18,1,18,1,18,3,18,484,8,18,1,18,
+ 1,18,3,18,488,8,18,1,18,5,18,491,8,18,10,18,12,18,494,9,18,1,19,
+ 1,19,1,19,1,19,3,19,500,8,19,1,20,1,20,1,20,1,20,3,20,506,8,20,1,
+ 20,1,20,1,20,3,20,511,8,20,1,21,1,21,1,21,1,21,3,21,517,8,21,1,21,
+ 1,21,1,21,1,21,3,21,523,8,21,1,22,1,22,1,22,3,22,528,8,22,1,22,1,
+ 22,3,22,532,8,22,1,22,5,22,535,8,22,10,22,12,22,538,9,22,3,22,540,
+ 8,22,1,22,3,22,543,8,22,1,22,3,22,546,8,22,1,23,1,23,1,23,1,23,1,
+ 23,3,23,553,8,23,1,23,1,23,1,24,1,24,3,24,559,8,24,1,24,3,24,562,
+ 8,24,1,24,1,24,1,24,3,24,567,8,24,1,24,3,24,570,8,24,1,25,1,25,3,
+ 25,574,8,25,1,25,3,25,577,8,25,1,25,1,25,1,25,1,26,1,26,1,26,3,26,
+ 585,8,26,1,26,1,26,3,26,589,8,26,1,26,1,26,3,26,593,8,26,1,27,1,
+ 27,3,27,597,8,27,1,27,1,27,3,27,601,8,27,1,27,5,27,604,8,27,10,27,
+ 12,27,607,9,27,1,27,1,27,3,27,611,8,27,1,27,1,27,3,27,615,8,27,1,
+ 27,5,27,618,8,27,10,27,12,27,621,9,27,3,27,623,8,27,1,28,1,28,1,
+ 28,1,28,1,28,1,28,1,28,3,28,632,8,28,1,29,1,29,1,29,1,29,1,29,1,
+ 29,1,29,3,29,641,8,29,1,29,5,29,644,8,29,10,29,12,29,647,9,29,1,
+ 30,1,30,1,30,1,30,1,31,1,31,1,31,1,31,1,32,1,32,3,32,659,8,32,1,
+ 32,3,32,662,8,32,1,33,1,33,1,33,1,33,1,33,1,33,1,33,1,33,1,33,1,
+ 33,1,33,1,33,1,33,1,33,3,33,678,8,33,1,34,1,34,1,34,1,34,1,35,1,
+ 35,3,35,686,8,35,1,35,1,35,3,35,690,8,35,1,35,5,35,693,8,35,10,35,
+ 12,35,696,9,35,1,36,1,36,3,36,700,8,36,1,36,1,36,3,36,704,8,36,1,
+ 36,1,36,1,36,3,36,709,8,36,1,37,1,37,1,38,1,38,3,38,715,8,38,1,38,
+ 5,38,718,8,38,10,38,12,38,721,9,38,1,38,1,38,1,38,1,38,3,38,727,
+ 8,38,1,39,1,39,3,39,731,8,39,1,39,1,39,3,39,735,8,39,3,39,737,8,
+ 39,1,39,1,39,3,39,741,8,39,3,39,743,8,39,1,39,1,39,3,39,747,8,39,
+ 3,39,749,8,39,1,39,1,39,1,40,1,40,3,40,755,8,40,1,40,1,40,1,41,1,
+ 41,3,41,761,8,41,1,41,1,41,3,41,765,8,41,1,41,3,41,768,8,41,1,41,
+ 3,41,771,8,41,1,41,1,41,3,41,775,8,41,1,41,1,41,1,41,1,41,3,41,781,
+ 8,41,1,41,1,41,3,41,785,8,41,1,41,3,41,788,8,41,1,41,3,41,791,8,
+ 41,1,41,1,41,1,41,1,41,3,41,797,8,41,1,41,3,41,800,8,41,1,41,3,41,
+ 803,8,41,1,41,1,41,3,41,807,8,41,1,41,1,41,1,41,1,41,3,41,813,8,
+ 41,1,41,3,41,816,8,41,1,41,3,41,819,8,41,1,41,1,41,3,41,823,8,41,
+ 1,42,1,42,3,42,827,8,42,1,42,1,42,3,42,831,8,42,3,42,833,8,42,1,
+ 42,1,42,3,42,837,8,42,3,42,839,8,42,1,42,3,42,842,8,42,1,42,1,42,
+ 3,42,846,8,42,3,42,848,8,42,1,42,1,42,1,43,1,43,3,43,854,8,43,1,
+ 44,1,44,3,44,858,8,44,1,44,1,44,3,44,862,8,44,1,44,1,44,3,44,866,
+ 8,44,1,44,3,44,869,8,44,1,44,5,44,872,8,44,10,44,12,44,875,9,44,
+ 1,45,1,45,3,45,879,8,45,1,45,5,45,882,8,45,10,45,12,45,885,9,45,
+ 1,46,1,46,3,46,889,8,46,1,46,1,46,1,47,1,47,3,47,895,8,47,1,47,1,
+ 47,3,47,899,8,47,3,47,901,8,47,1,47,1,47,3,47,905,8,47,1,47,1,47,
+ 3,47,909,8,47,3,47,911,8,47,3,47,913,8,47,1,48,1,48,1,49,1,49,1,
+ 50,1,50,1,51,1,51,1,51,1,51,1,51,5,51,926,8,51,10,51,12,51,929,9,
+ 51,1,52,1,52,1,52,1,52,1,52,5,52,936,8,52,10,52,12,52,939,9,52,1,
+ 53,1,53,1,53,1,53,1,53,5,53,946,8,53,10,53,12,53,949,9,53,1,54,1,
+ 54,3,54,953,8,54,5,54,955,8,54,10,54,12,54,958,9,54,1,54,1,54,1,
+ 55,1,55,3,55,964,8,55,1,55,5,55,967,8,55,10,55,12,55,970,9,55,1,
+ 56,1,56,3,56,974,8,56,1,56,1,56,3,56,978,8,56,1,56,1,56,3,56,982,
+ 8,56,1,56,1,56,3,56,986,8,56,1,56,5,56,989,8,56,10,56,12,56,992,
+ 9,56,1,57,1,57,3,57,996,8,57,1,57,1,57,3,57,1000,8,57,1,57,1,57,
+ 3,57,1004,8,57,1,57,1,57,3,57,1008,8,57,1,57,1,57,3,57,1012,8,57,
+ 1,57,1,57,3,57,1016,8,57,1,57,5,57,1019,8,57,10,57,12,57,1022,9,
+ 57,1,58,1,58,3,58,1026,8,58,1,58,1,58,3,58,1030,8,58,1,58,5,58,1033,
+ 8,58,10,58,12,58,1036,9,58,1,59,1,59,3,59,1040,8,59,5,59,1042,8,
+ 59,10,59,12,59,1045,9,59,1,59,1,59,1,60,1,60,1,60,1,60,5,60,1053,
+ 8,60,10,60,12,60,1056,9,60,1,61,1,61,1,61,3,61,1061,8,61,1,61,1,
+ 61,3,61,1065,8,61,1,61,1,61,1,61,1,61,1,61,3,61,1072,8,61,1,61,1,
+ 61,3,61,1076,8,61,1,61,1,61,3,61,1080,8,61,1,61,3,61,1083,8,61,1,
+ 62,1,62,1,62,1,62,1,62,1,62,1,62,1,62,1,62,1,62,1,62,1,62,3,62,1097,
+ 8,62,1,62,3,62,1100,8,62,1,62,1,62,1,63,1,63,1,63,1,63,1,63,1,63,
+ 1,63,1,63,1,63,1,63,3,63,1114,8,63,1,64,1,64,3,64,1118,8,64,1,64,
+ 5,64,1121,8,64,10,64,12,64,1124,9,64,1,64,3,64,1127,8,64,1,64,3,
+ 64,1130,8,64,1,65,1,65,1,65,1,65,1,65,3,65,1137,8,65,1,65,1,65,3,
+ 65,1141,8,65,1,65,1,65,3,65,1145,8,65,1,65,1,65,1,65,1,65,1,65,3,
+ 65,1152,8,65,1,65,1,65,3,65,1156,8,65,1,65,1,65,3,65,1160,8,65,1,
+ 65,1,65,1,65,1,65,3,65,1166,8,65,1,65,1,65,3,65,1170,8,65,1,65,1,
+ 65,3,65,1174,8,65,1,65,1,65,1,65,1,65,3,65,1180,8,65,1,65,1,65,3,
+ 65,1184,8,65,1,65,1,65,3,65,1188,8,65,1,65,1,65,1,65,1,65,3,65,1194,
+ 8,65,1,65,1,65,3,65,1198,8,65,1,65,1,65,3,65,1202,8,65,1,65,1,65,
+ 1,65,1,65,1,65,1,65,3,65,1210,8,65,1,66,1,66,1,66,1,66,1,66,1,66,
+ 3,66,1218,8,66,1,67,1,67,1,68,1,68,3,68,1224,8,68,1,68,1,68,3,68,
+ 1228,8,68,1,68,1,68,3,68,1232,8,68,1,68,1,68,3,68,1236,8,68,5,68,
+ 1238,8,68,10,68,12,68,1241,9,68,3,68,1243,8,68,1,68,1,68,1,69,1,
+ 69,3,69,1249,8,69,1,69,1,69,1,69,3,69,1254,8,69,1,69,1,69,1,69,3,
+ 69,1259,8,69,1,69,1,69,1,69,3,69,1264,8,69,1,69,1,69,1,69,3,69,1269,
+ 8,69,1,69,1,69,1,69,3,69,1274,8,69,1,69,3,69,1277,8,69,1,70,1,70,
+ 3,70,1281,8,70,1,70,1,70,3,70,1285,8,70,1,70,1,70,1,71,1,71,3,71,
+ 1291,8,71,1,71,4,71,1294,8,71,11,71,12,71,1295,1,72,1,72,3,72,1300,
+ 8,72,1,72,3,72,1303,8,72,1,73,1,73,1,73,1,73,1,73,1,73,1,74,1,74,
+ 3,74,1313,8,74,1,74,1,74,3,74,1317,8,74,1,74,1,74,3,74,1321,8,74,
+ 3,74,1323,8,74,1,74,1,74,3,74,1327,8,74,1,74,1,74,3,74,1331,8,74,
+ 1,74,1,74,3,74,1335,8,74,5,74,1337,8,74,10,74,12,74,1340,9,74,3,
+ 74,1342,8,74,1,74,1,74,1,75,1,75,1,75,1,75,3,75,1350,8,75,1,76,1,
+ 76,3,76,1354,8,76,1,76,1,76,3,76,1358,8,76,1,76,1,76,3,76,1362,8,
+ 76,1,76,1,76,3,76,1366,8,76,1,76,1,76,3,76,1370,8,76,5,76,1372,8,
+ 76,10,76,12,76,1375,9,76,3,76,1377,8,76,1,76,1,76,1,77,1,77,1,78,
+ 1,78,1,79,1,79,1,79,1,80,1,80,1,80,5,80,1391,8,80,10,80,12,80,1394,
+ 9,80,1,81,1,81,3,81,1398,8,81,1,81,1,81,3,81,1402,8,81,1,81,1,81,
+ 3,81,1406,8,81,1,81,3,81,1409,8,81,1,81,3,81,1412,8,81,1,81,1,81,
+ 1,82,1,82,3,82,1418,8,82,1,82,1,82,3,82,1422,8,82,1,82,1,82,3,82,
+ 1426,8,82,3,82,1428,8,82,1,82,1,82,3,82,1432,8,82,1,82,1,82,3,82,
+ 1436,8,82,1,82,1,82,3,82,1440,8,82,3,82,1442,8,82,1,82,1,82,3,82,
+ 1446,8,82,1,82,1,82,3,82,1450,8,82,1,82,1,82,1,83,1,83,3,83,1456,
+ 8,83,1,83,1,83,1,84,1,84,3,84,1462,8,84,1,84,4,84,1465,8,84,11,84,
+ 12,84,1466,1,84,1,84,3,84,1471,8,84,1,84,1,84,3,84,1475,8,84,1,84,
+ 4,84,1478,8,84,11,84,12,84,1479,3,84,1482,8,84,1,84,3,84,1485,8,
+ 84,1,84,1,84,3,84,1489,8,84,1,84,3,84,1492,8,84,1,84,3,84,1495,8,
+ 84,1,84,1,84,1,85,1,85,3,85,1501,8,85,1,85,1,85,3,85,1505,8,85,1,
+ 85,1,85,3,85,1509,8,85,1,85,1,85,1,86,1,86,1,87,1,87,3,87,1517,8,
+ 87,1,88,1,88,3,88,1521,8,88,1,88,1,88,3,88,1525,8,88,1,88,1,88,3,
+ 88,1529,8,88,1,88,1,88,3,88,1533,8,88,1,88,1,88,3,88,1537,8,88,1,
+ 88,1,88,3,88,1541,8,88,1,88,1,88,3,88,1545,8,88,1,88,1,88,3,88,1549,
+ 8,88,5,88,1551,8,88,10,88,12,88,1554,9,88,3,88,1556,8,88,1,88,1,
+ 88,1,89,1,89,1,89,3,89,1563,8,89,1,90,1,90,3,90,1567,8,90,1,90,4,
+ 90,1570,8,90,11,90,12,90,1571,1,91,1,91,1,92,1,92,1,93,1,93,1,94,
+ 1,94,3,94,1582,8,94,1,95,1,95,1,96,1,96,1,97,1,97,1,98,1,98,1,99,
+ 1,99,1,99,0,0,100,0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,
+ 34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,
+ 78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,
+ 116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,
+ 148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,
+ 180,182,184,186,188,190,192,194,196,198,0,10,1,0,70,73,1,0,13,14,
+ 1,0,93,94,1,0,103,105,1,0,113,114,4,0,89,92,106,106,115,117,130,
+ 130,6,0,48,60,63,73,77,85,87,88,93,100,118,127,2,0,19,19,27,30,2,
+ 0,20,20,31,34,2,0,14,14,35,45,1820,0,201,1,0,0,0,2,226,1,0,0,0,4,
+ 230,1,0,0,0,6,232,1,0,0,0,8,254,1,0,0,0,10,258,1,0,0,0,12,295,1,
+ 0,0,0,14,319,1,0,0,0,16,330,1,0,0,0,18,335,1,0,0,0,20,339,1,0,0,
+ 0,22,361,1,0,0,0,24,371,1,0,0,0,26,393,1,0,0,0,28,395,1,0,0,0,30,
+ 401,1,0,0,0,32,455,1,0,0,0,34,459,1,0,0,0,36,479,1,0,0,0,38,499,
+ 1,0,0,0,40,501,1,0,0,0,42,512,1,0,0,0,44,539,1,0,0,0,46,552,1,0,
+ 0,0,48,556,1,0,0,0,50,571,1,0,0,0,52,581,1,0,0,0,54,622,1,0,0,0,
+ 56,631,1,0,0,0,58,633,1,0,0,0,60,648,1,0,0,0,62,652,1,0,0,0,64,656,
+ 1,0,0,0,66,677,1,0,0,0,68,679,1,0,0,0,70,683,1,0,0,0,72,708,1,0,
+ 0,0,74,710,1,0,0,0,76,726,1,0,0,0,78,728,1,0,0,0,80,752,1,0,0,0,
+ 82,822,1,0,0,0,84,824,1,0,0,0,86,853,1,0,0,0,88,855,1,0,0,0,90,876,
+ 1,0,0,0,92,886,1,0,0,0,94,892,1,0,0,0,96,914,1,0,0,0,98,916,1,0,
+ 0,0,100,918,1,0,0,0,102,920,1,0,0,0,104,930,1,0,0,0,106,940,1,0,
+ 0,0,108,956,1,0,0,0,110,961,1,0,0,0,112,971,1,0,0,0,114,993,1,0,
+ 0,0,116,1023,1,0,0,0,118,1043,1,0,0,0,120,1048,1,0,0,0,122,1082,
+ 1,0,0,0,124,1096,1,0,0,0,126,1113,1,0,0,0,128,1115,1,0,0,0,130,1209,
+ 1,0,0,0,132,1217,1,0,0,0,134,1219,1,0,0,0,136,1221,1,0,0,0,138,1276,
+ 1,0,0,0,140,1278,1,0,0,0,142,1288,1,0,0,0,144,1297,1,0,0,0,146,1304,
+ 1,0,0,0,148,1310,1,0,0,0,150,1349,1,0,0,0,152,1351,1,0,0,0,154,1380,
+ 1,0,0,0,156,1382,1,0,0,0,158,1384,1,0,0,0,160,1392,1,0,0,0,162,1395,
+ 1,0,0,0,164,1415,1,0,0,0,166,1453,1,0,0,0,168,1481,1,0,0,0,170,1498,
+ 1,0,0,0,172,1512,1,0,0,0,174,1516,1,0,0,0,176,1518,1,0,0,0,178,1559,
+ 1,0,0,0,180,1564,1,0,0,0,182,1573,1,0,0,0,184,1575,1,0,0,0,186,1577,
+ 1,0,0,0,188,1581,1,0,0,0,190,1583,1,0,0,0,192,1585,1,0,0,0,194,1587,
+ 1,0,0,0,196,1589,1,0,0,0,198,1591,1,0,0,0,200,202,5,131,0,0,201,
+ 200,1,0,0,0,201,202,1,0,0,0,202,203,1,0,0,0,203,208,3,2,1,0,204,
+ 206,5,131,0,0,205,204,1,0,0,0,205,206,1,0,0,0,206,207,1,0,0,0,207,
+ 209,5,1,0,0,208,205,1,0,0,0,208,209,1,0,0,0,209,211,1,0,0,0,210,
+ 212,5,131,0,0,211,210,1,0,0,0,211,212,1,0,0,0,212,213,1,0,0,0,213,
+ 214,5,0,0,1,214,1,1,0,0,0,215,227,3,4,2,0,216,218,5,46,0,0,217,219,
+ 5,131,0,0,218,217,1,0,0,0,218,219,1,0,0,0,219,220,1,0,0,0,220,227,
+ 3,4,2,0,221,223,5,47,0,0,222,224,5,131,0,0,223,222,1,0,0,0,223,224,
+ 1,0,0,0,224,225,1,0,0,0,225,227,3,4,2,0,226,215,1,0,0,0,226,216,
+ 1,0,0,0,226,221,1,0,0,0,227,3,1,0,0,0,228,231,3,6,3,0,229,231,3,
+ 42,21,0,230,228,1,0,0,0,230,229,1,0,0,0,231,5,1,0,0,0,232,239,3,
+ 10,5,0,233,235,5,131,0,0,234,233,1,0,0,0,234,235,1,0,0,0,235,236,
+ 1,0,0,0,236,238,3,8,4,0,237,234,1,0,0,0,238,241,1,0,0,0,239,237,
+ 1,0,0,0,239,240,1,0,0,0,240,7,1,0,0,0,241,239,1,0,0,0,242,243,5,
+ 48,0,0,243,244,5,131,0,0,244,246,5,49,0,0,245,247,5,131,0,0,246,
+ 245,1,0,0,0,246,247,1,0,0,0,247,248,1,0,0,0,248,255,3,10,5,0,249,
+ 251,5,48,0,0,250,252,5,131,0,0,251,250,1,0,0,0,251,252,1,0,0,0,252,
+ 253,1,0,0,0,253,255,3,10,5,0,254,242,1,0,0,0,254,249,1,0,0,0,255,
+ 9,1,0,0,0,256,259,3,12,6,0,257,259,3,14,7,0,258,256,1,0,0,0,258,
+ 257,1,0,0,0,259,11,1,0,0,0,260,262,3,18,9,0,261,263,5,131,0,0,262,
+ 261,1,0,0,0,262,263,1,0,0,0,263,265,1,0,0,0,264,260,1,0,0,0,265,
+ 268,1,0,0,0,266,264,1,0,0,0,266,267,1,0,0,0,267,269,1,0,0,0,268,
+ 266,1,0,0,0,269,296,3,50,25,0,270,272,3,18,9,0,271,273,5,131,0,0,
+ 272,271,1,0,0,0,272,273,1,0,0,0,273,275,1,0,0,0,274,270,1,0,0,0,
+ 275,278,1,0,0,0,276,274,1,0,0,0,276,277,1,0,0,0,277,279,1,0,0,0,
+ 278,276,1,0,0,0,279,286,3,16,8,0,280,282,5,131,0,0,281,280,1,0,0,
+ 0,281,282,1,0,0,0,282,283,1,0,0,0,283,285,3,16,8,0,284,281,1,0,0,
+ 0,285,288,1,0,0,0,286,284,1,0,0,0,286,287,1,0,0,0,287,293,1,0,0,
+ 0,288,286,1,0,0,0,289,291,5,131,0,0,290,289,1,0,0,0,290,291,1,0,
+ 0,0,291,292,1,0,0,0,292,294,3,50,25,0,293,290,1,0,0,0,293,294,1,
+ 0,0,0,294,296,1,0,0,0,295,266,1,0,0,0,295,276,1,0,0,0,296,13,1,0,
+ 0,0,297,299,3,18,9,0,298,300,5,131,0,0,299,298,1,0,0,0,299,300,1,
+ 0,0,0,300,302,1,0,0,0,301,297,1,0,0,0,302,305,1,0,0,0,303,301,1,
+ 0,0,0,303,304,1,0,0,0,304,312,1,0,0,0,305,303,1,0,0,0,306,308,3,
+ 16,8,0,307,309,5,131,0,0,308,307,1,0,0,0,308,309,1,0,0,0,309,311,
+ 1,0,0,0,310,306,1,0,0,0,311,314,1,0,0,0,312,310,1,0,0,0,312,313,
+ 1,0,0,0,313,315,1,0,0,0,314,312,1,0,0,0,315,317,3,48,24,0,316,318,
+ 5,131,0,0,317,316,1,0,0,0,317,318,1,0,0,0,318,320,1,0,0,0,319,303,
+ 1,0,0,0,320,321,1,0,0,0,321,319,1,0,0,0,321,322,1,0,0,0,322,323,
+ 1,0,0,0,323,324,3,12,6,0,324,15,1,0,0,0,325,331,3,28,14,0,326,331,
+ 3,24,12,0,327,331,3,34,17,0,328,331,3,30,15,0,329,331,3,36,18,0,
+ 330,325,1,0,0,0,330,326,1,0,0,0,330,327,1,0,0,0,330,328,1,0,0,0,
+ 330,329,1,0,0,0,331,17,1,0,0,0,332,336,3,20,10,0,333,336,3,22,11,
+ 0,334,336,3,40,20,0,335,332,1,0,0,0,335,333,1,0,0,0,335,334,1,0,
+ 0,0,336,19,1,0,0,0,337,338,5,50,0,0,338,340,5,131,0,0,339,337,1,
+ 0,0,0,339,340,1,0,0,0,340,341,1,0,0,0,341,343,5,51,0,0,342,344,5,
+ 131,0,0,343,342,1,0,0,0,343,344,1,0,0,0,344,345,1,0,0,0,345,352,
+ 3,70,35,0,346,348,5,131,0,0,347,346,1,0,0,0,347,348,1,0,0,0,348,
+ 349,1,0,0,0,349,351,3,66,33,0,350,347,1,0,0,0,351,354,1,0,0,0,352,
+ 350,1,0,0,0,352,353,1,0,0,0,353,359,1,0,0,0,354,352,1,0,0,0,355,
+ 357,5,131,0,0,356,355,1,0,0,0,356,357,1,0,0,0,357,358,1,0,0,0,358,
+ 360,3,68,34,0,359,356,1,0,0,0,359,360,1,0,0,0,360,21,1,0,0,0,361,
+ 363,5,52,0,0,362,364,5,131,0,0,363,362,1,0,0,0,363,364,1,0,0,0,364,
+ 365,1,0,0,0,365,366,3,100,50,0,366,367,5,131,0,0,367,368,5,53,0,
+ 0,368,369,5,131,0,0,369,370,3,172,86,0,370,23,1,0,0,0,371,373,5,
+ 54,0,0,372,374,5,131,0,0,373,372,1,0,0,0,373,374,1,0,0,0,374,375,
+ 1,0,0,0,375,380,3,72,36,0,376,377,5,131,0,0,377,379,3,26,13,0,378,
+ 376,1,0,0,0,379,382,1,0,0,0,380,378,1,0,0,0,380,381,1,0,0,0,381,
+ 25,1,0,0,0,382,380,1,0,0,0,383,384,5,55,0,0,384,385,5,131,0,0,385,
+ 386,5,51,0,0,386,387,5,131,0,0,387,394,3,30,15,0,388,389,5,55,0,
+ 0,389,390,5,131,0,0,390,391,5,56,0,0,391,392,5,131,0,0,392,394,3,
+ 30,15,0,393,383,1,0,0,0,393,388,1,0,0,0,394,27,1,0,0,0,395,397,5,
+ 56,0,0,396,398,5,131,0,0,397,396,1,0,0,0,397,398,1,0,0,0,398,399,
+ 1,0,0,0,399,400,3,70,35,0,400,29,1,0,0,0,401,403,5,57,0,0,402,404,
+ 5,131,0,0,403,402,1,0,0,0,403,404,1,0,0,0,404,405,1,0,0,0,405,416,
+ 3,32,16,0,406,408,5,131,0,0,407,406,1,0,0,0,407,408,1,0,0,0,408,
+ 409,1,0,0,0,409,411,5,2,0,0,410,412,5,131,0,0,411,410,1,0,0,0,411,
+ 412,1,0,0,0,412,413,1,0,0,0,413,415,3,32,16,0,414,407,1,0,0,0,415,
+ 418,1,0,0,0,416,414,1,0,0,0,416,417,1,0,0,0,417,31,1,0,0,0,418,416,
+ 1,0,0,0,419,421,3,180,90,0,420,422,5,131,0,0,421,420,1,0,0,0,421,
+ 422,1,0,0,0,422,423,1,0,0,0,423,425,5,3,0,0,424,426,5,131,0,0,425,
+ 424,1,0,0,0,425,426,1,0,0,0,426,427,1,0,0,0,427,428,3,100,50,0,428,
+ 456,1,0,0,0,429,431,3,172,86,0,430,432,5,131,0,0,431,430,1,0,0,0,
+ 431,432,1,0,0,0,432,433,1,0,0,0,433,435,5,3,0,0,434,436,5,131,0,
+ 0,435,434,1,0,0,0,435,436,1,0,0,0,436,437,1,0,0,0,437,438,3,100,
+ 50,0,438,456,1,0,0,0,439,441,3,172,86,0,440,442,5,131,0,0,441,440,
+ 1,0,0,0,441,442,1,0,0,0,442,443,1,0,0,0,443,445,5,4,0,0,444,446,
+ 5,131,0,0,445,444,1,0,0,0,445,446,1,0,0,0,446,447,1,0,0,0,447,448,
+ 3,100,50,0,448,456,1,0,0,0,449,451,3,172,86,0,450,452,5,131,0,0,
+ 451,450,1,0,0,0,451,452,1,0,0,0,452,453,1,0,0,0,453,454,3,90,45,
+ 0,454,456,1,0,0,0,455,419,1,0,0,0,455,429,1,0,0,0,455,439,1,0,0,
+ 0,455,449,1,0,0,0,456,33,1,0,0,0,457,458,5,58,0,0,458,460,5,131,
+ 0,0,459,457,1,0,0,0,459,460,1,0,0,0,460,461,1,0,0,0,461,463,5,59,
+ 0,0,462,464,5,131,0,0,463,462,1,0,0,0,463,464,1,0,0,0,464,465,1,
+ 0,0,0,465,476,3,100,50,0,466,468,5,131,0,0,467,466,1,0,0,0,467,468,
+ 1,0,0,0,468,469,1,0,0,0,469,471,5,2,0,0,470,472,5,131,0,0,471,470,
+ 1,0,0,0,471,472,1,0,0,0,472,473,1,0,0,0,473,475,3,100,50,0,474,467,
+ 1,0,0,0,475,478,1,0,0,0,476,474,1,0,0,0,476,477,1,0,0,0,477,35,1,
+ 0,0,0,478,476,1,0,0,0,479,480,5,60,0,0,480,481,5,131,0,0,481,492,
+ 3,38,19,0,482,484,5,131,0,0,483,482,1,0,0,0,483,484,1,0,0,0,484,
+ 485,1,0,0,0,485,487,5,2,0,0,486,488,5,131,0,0,487,486,1,0,0,0,487,
+ 488,1,0,0,0,488,489,1,0,0,0,489,491,3,38,19,0,490,483,1,0,0,0,491,
+ 494,1,0,0,0,492,490,1,0,0,0,492,493,1,0,0,0,493,37,1,0,0,0,494,492,
+ 1,0,0,0,495,496,3,172,86,0,496,497,3,90,45,0,497,500,1,0,0,0,498,
+ 500,3,180,90,0,499,495,1,0,0,0,499,498,1,0,0,0,500,39,1,0,0,0,501,
+ 502,5,61,0,0,502,503,5,131,0,0,503,510,3,152,76,0,504,506,5,131,
+ 0,0,505,504,1,0,0,0,505,506,1,0,0,0,506,507,1,0,0,0,507,508,5,62,
+ 0,0,508,509,5,131,0,0,509,511,3,44,22,0,510,505,1,0,0,0,510,511,
+ 1,0,0,0,511,41,1,0,0,0,512,513,5,61,0,0,513,516,5,131,0,0,514,517,
+ 3,152,76,0,515,517,3,154,77,0,516,514,1,0,0,0,516,515,1,0,0,0,517,
+ 522,1,0,0,0,518,519,5,131,0,0,519,520,5,62,0,0,520,521,5,131,0,0,
+ 521,523,3,44,22,0,522,518,1,0,0,0,522,523,1,0,0,0,523,43,1,0,0,0,
+ 524,540,5,5,0,0,525,536,3,46,23,0,526,528,5,131,0,0,527,526,1,0,
+ 0,0,527,528,1,0,0,0,528,529,1,0,0,0,529,531,5,2,0,0,530,532,5,131,
+ 0,0,531,530,1,0,0,0,531,532,1,0,0,0,532,533,1,0,0,0,533,535,3,46,
+ 23,0,534,527,1,0,0,0,535,538,1,0,0,0,536,534,1,0,0,0,536,537,1,0,
+ 0,0,537,540,1,0,0,0,538,536,1,0,0,0,539,524,1,0,0,0,539,525,1,0,
+ 0,0,540,545,1,0,0,0,541,543,5,131,0,0,542,541,1,0,0,0,542,543,1,
+ 0,0,0,543,544,1,0,0,0,544,546,3,68,34,0,545,542,1,0,0,0,545,546,
+ 1,0,0,0,546,45,1,0,0,0,547,548,3,156,78,0,548,549,5,131,0,0,549,
+ 550,5,53,0,0,550,551,5,131,0,0,551,553,1,0,0,0,552,547,1,0,0,0,552,
+ 553,1,0,0,0,553,554,1,0,0,0,554,555,3,172,86,0,555,47,1,0,0,0,556,
+ 561,5,63,0,0,557,559,5,131,0,0,558,557,1,0,0,0,558,559,1,0,0,0,559,
+ 560,1,0,0,0,560,562,5,64,0,0,561,558,1,0,0,0,561,562,1,0,0,0,562,
+ 563,1,0,0,0,563,564,5,131,0,0,564,569,3,52,26,0,565,567,5,131,0,
+ 0,566,565,1,0,0,0,566,567,1,0,0,0,567,568,1,0,0,0,568,570,3,68,34,
+ 0,569,566,1,0,0,0,569,570,1,0,0,0,570,49,1,0,0,0,571,576,5,65,0,
+ 0,572,574,5,131,0,0,573,572,1,0,0,0,573,574,1,0,0,0,574,575,1,0,
+ 0,0,575,577,5,64,0,0,576,573,1,0,0,0,576,577,1,0,0,0,577,578,1,0,
+ 0,0,578,579,5,131,0,0,579,580,3,52,26,0,580,51,1,0,0,0,581,584,3,
+ 54,27,0,582,583,5,131,0,0,583,585,3,58,29,0,584,582,1,0,0,0,584,
+ 585,1,0,0,0,585,588,1,0,0,0,586,587,5,131,0,0,587,589,3,60,30,0,
+ 588,586,1,0,0,0,588,589,1,0,0,0,589,592,1,0,0,0,590,591,5,131,0,
+ 0,591,593,3,62,31,0,592,590,1,0,0,0,592,593,1,0,0,0,593,53,1,0,0,
+ 0,594,605,5,5,0,0,595,597,5,131,0,0,596,595,1,0,0,0,596,597,1,0,
+ 0,0,597,598,1,0,0,0,598,600,5,2,0,0,599,601,5,131,0,0,600,599,1,
+ 0,0,0,600,601,1,0,0,0,601,602,1,0,0,0,602,604,3,56,28,0,603,596,
+ 1,0,0,0,604,607,1,0,0,0,605,603,1,0,0,0,605,606,1,0,0,0,606,623,
+ 1,0,0,0,607,605,1,0,0,0,608,619,3,56,28,0,609,611,5,131,0,0,610,
+ 609,1,0,0,0,610,611,1,0,0,0,611,612,1,0,0,0,612,614,5,2,0,0,613,
+ 615,5,131,0,0,614,613,1,0,0,0,614,615,1,0,0,0,615,616,1,0,0,0,616,
+ 618,3,56,28,0,617,610,1,0,0,0,618,621,1,0,0,0,619,617,1,0,0,0,619,
+ 620,1,0,0,0,620,623,1,0,0,0,621,619,1,0,0,0,622,594,1,0,0,0,622,
+ 608,1,0,0,0,623,55,1,0,0,0,624,625,3,100,50,0,625,626,5,131,0,0,
+ 626,627,5,53,0,0,627,628,5,131,0,0,628,629,3,172,86,0,629,632,1,
+ 0,0,0,630,632,3,100,50,0,631,624,1,0,0,0,631,630,1,0,0,0,632,57,
+ 1,0,0,0,633,634,5,66,0,0,634,635,5,131,0,0,635,636,5,67,0,0,636,
+ 637,5,131,0,0,637,645,3,64,32,0,638,640,5,2,0,0,639,641,5,131,0,
+ 0,640,639,1,0,0,0,640,641,1,0,0,0,641,642,1,0,0,0,642,644,3,64,32,
+ 0,643,638,1,0,0,0,644,647,1,0,0,0,645,643,1,0,0,0,645,646,1,0,0,
+ 0,646,59,1,0,0,0,647,645,1,0,0,0,648,649,5,68,0,0,649,650,5,131,
+ 0,0,650,651,3,100,50,0,651,61,1,0,0,0,652,653,5,69,0,0,653,654,5,
+ 131,0,0,654,655,3,100,50,0,655,63,1,0,0,0,656,661,3,100,50,0,657,
+ 659,5,131,0,0,658,657,1,0,0,0,658,659,1,0,0,0,659,660,1,0,0,0,660,
+ 662,7,0,0,0,661,658,1,0,0,0,661,662,1,0,0,0,662,65,1,0,0,0,663,664,
+ 5,74,0,0,664,665,5,131,0,0,665,666,5,75,0,0,666,667,5,131,0,0,667,
+ 668,5,55,0,0,668,669,5,131,0,0,669,678,3,172,86,0,670,671,5,74,0,
+ 0,671,672,5,131,0,0,672,673,5,76,0,0,673,674,5,131,0,0,674,675,5,
+ 55,0,0,675,676,5,131,0,0,676,678,3,172,86,0,677,663,1,0,0,0,677,
+ 670,1,0,0,0,678,67,1,0,0,0,679,680,5,77,0,0,680,681,5,131,0,0,681,
+ 682,3,100,50,0,682,69,1,0,0,0,683,694,3,72,36,0,684,686,5,131,0,
+ 0,685,684,1,0,0,0,685,686,1,0,0,0,686,687,1,0,0,0,687,689,5,2,0,
+ 0,688,690,5,131,0,0,689,688,1,0,0,0,689,690,1,0,0,0,690,691,1,0,
+ 0,0,691,693,3,72,36,0,692,685,1,0,0,0,693,696,1,0,0,0,694,692,1,
+ 0,0,0,694,695,1,0,0,0,695,71,1,0,0,0,696,694,1,0,0,0,697,699,3,172,
+ 86,0,698,700,5,131,0,0,699,698,1,0,0,0,699,700,1,0,0,0,700,701,1,
+ 0,0,0,701,703,5,3,0,0,702,704,5,131,0,0,703,702,1,0,0,0,703,704,
+ 1,0,0,0,704,705,1,0,0,0,705,706,3,74,37,0,706,709,1,0,0,0,707,709,
+ 3,74,37,0,708,697,1,0,0,0,708,707,1,0,0,0,709,73,1,0,0,0,710,711,
+ 3,76,38,0,711,75,1,0,0,0,712,719,3,78,39,0,713,715,5,131,0,0,714,
+ 713,1,0,0,0,714,715,1,0,0,0,715,716,1,0,0,0,716,718,3,80,40,0,717,
+ 714,1,0,0,0,718,721,1,0,0,0,719,717,1,0,0,0,719,720,1,0,0,0,720,
+ 727,1,0,0,0,721,719,1,0,0,0,722,723,5,6,0,0,723,724,3,76,38,0,724,
+ 725,5,7,0,0,725,727,1,0,0,0,726,712,1,0,0,0,726,722,1,0,0,0,727,
+ 77,1,0,0,0,728,730,5,6,0,0,729,731,5,131,0,0,730,729,1,0,0,0,730,
+ 731,1,0,0,0,731,736,1,0,0,0,732,734,3,172,86,0,733,735,5,131,0,0,
+ 734,733,1,0,0,0,734,735,1,0,0,0,735,737,1,0,0,0,736,732,1,0,0,0,
+ 736,737,1,0,0,0,737,742,1,0,0,0,738,740,3,90,45,0,739,741,5,131,
+ 0,0,740,739,1,0,0,0,740,741,1,0,0,0,741,743,1,0,0,0,742,738,1,0,
+ 0,0,742,743,1,0,0,0,743,748,1,0,0,0,744,746,3,86,43,0,745,747,5,
+ 131,0,0,746,745,1,0,0,0,746,747,1,0,0,0,747,749,1,0,0,0,748,744,
+ 1,0,0,0,748,749,1,0,0,0,749,750,1,0,0,0,750,751,5,7,0,0,751,79,1,
+ 0,0,0,752,754,3,82,41,0,753,755,5,131,0,0,754,753,1,0,0,0,754,755,
+ 1,0,0,0,755,756,1,0,0,0,756,757,3,78,39,0,757,81,1,0,0,0,758,760,
+ 3,194,97,0,759,761,5,131,0,0,760,759,1,0,0,0,760,761,1,0,0,0,761,
+ 762,1,0,0,0,762,764,3,198,99,0,763,765,5,131,0,0,764,763,1,0,0,0,
+ 764,765,1,0,0,0,765,767,1,0,0,0,766,768,3,84,42,0,767,766,1,0,0,
+ 0,767,768,1,0,0,0,768,770,1,0,0,0,769,771,5,131,0,0,770,769,1,0,
+ 0,0,770,771,1,0,0,0,771,772,1,0,0,0,772,774,3,198,99,0,773,775,5,
+ 131,0,0,774,773,1,0,0,0,774,775,1,0,0,0,775,776,1,0,0,0,776,777,
+ 3,196,98,0,777,823,1,0,0,0,778,780,3,194,97,0,779,781,5,131,0,0,
+ 780,779,1,0,0,0,780,781,1,0,0,0,781,782,1,0,0,0,782,784,3,198,99,
+ 0,783,785,5,131,0,0,784,783,1,0,0,0,784,785,1,0,0,0,785,787,1,0,
+ 0,0,786,788,3,84,42,0,787,786,1,0,0,0,787,788,1,0,0,0,788,790,1,
+ 0,0,0,789,791,5,131,0,0,790,789,1,0,0,0,790,791,1,0,0,0,791,792,
+ 1,0,0,0,792,793,3,198,99,0,793,823,1,0,0,0,794,796,3,198,99,0,795,
+ 797,5,131,0,0,796,795,1,0,0,0,796,797,1,0,0,0,797,799,1,0,0,0,798,
+ 800,3,84,42,0,799,798,1,0,0,0,799,800,1,0,0,0,800,802,1,0,0,0,801,
+ 803,5,131,0,0,802,801,1,0,0,0,802,803,1,0,0,0,803,804,1,0,0,0,804,
+ 806,3,198,99,0,805,807,5,131,0,0,806,805,1,0,0,0,806,807,1,0,0,0,
+ 807,808,1,0,0,0,808,809,3,196,98,0,809,823,1,0,0,0,810,812,3,198,
+ 99,0,811,813,5,131,0,0,812,811,1,0,0,0,812,813,1,0,0,0,813,815,1,
+ 0,0,0,814,816,3,84,42,0,815,814,1,0,0,0,815,816,1,0,0,0,816,818,
+ 1,0,0,0,817,819,5,131,0,0,818,817,1,0,0,0,818,819,1,0,0,0,819,820,
+ 1,0,0,0,820,821,3,198,99,0,821,823,1,0,0,0,822,758,1,0,0,0,822,778,
+ 1,0,0,0,822,794,1,0,0,0,822,810,1,0,0,0,823,83,1,0,0,0,824,826,5,
+ 8,0,0,825,827,5,131,0,0,826,825,1,0,0,0,826,827,1,0,0,0,827,832,
+ 1,0,0,0,828,830,3,172,86,0,829,831,5,131,0,0,830,829,1,0,0,0,830,
+ 831,1,0,0,0,831,833,1,0,0,0,832,828,1,0,0,0,832,833,1,0,0,0,833,
+ 838,1,0,0,0,834,836,3,88,44,0,835,837,5,131,0,0,836,835,1,0,0,0,
+ 836,837,1,0,0,0,837,839,1,0,0,0,838,834,1,0,0,0,838,839,1,0,0,0,
+ 839,841,1,0,0,0,840,842,3,94,47,0,841,840,1,0,0,0,841,842,1,0,0,
+ 0,842,847,1,0,0,0,843,845,3,86,43,0,844,846,5,131,0,0,845,844,1,
+ 0,0,0,845,846,1,0,0,0,846,848,1,0,0,0,847,843,1,0,0,0,847,848,1,
+ 0,0,0,848,849,1,0,0,0,849,850,5,9,0,0,850,85,1,0,0,0,851,854,3,176,
+ 88,0,852,854,3,178,89,0,853,851,1,0,0,0,853,852,1,0,0,0,854,87,1,
+ 0,0,0,855,857,5,10,0,0,856,858,5,131,0,0,857,856,1,0,0,0,857,858,
+ 1,0,0,0,858,859,1,0,0,0,859,873,3,98,49,0,860,862,5,131,0,0,861,
+ 860,1,0,0,0,861,862,1,0,0,0,862,863,1,0,0,0,863,865,5,11,0,0,864,
+ 866,5,10,0,0,865,864,1,0,0,0,865,866,1,0,0,0,866,868,1,0,0,0,867,
+ 869,5,131,0,0,868,867,1,0,0,0,868,869,1,0,0,0,869,870,1,0,0,0,870,
+ 872,3,98,49,0,871,861,1,0,0,0,872,875,1,0,0,0,873,871,1,0,0,0,873,
+ 874,1,0,0,0,874,89,1,0,0,0,875,873,1,0,0,0,876,883,3,92,46,0,877,
+ 879,5,131,0,0,878,877,1,0,0,0,878,879,1,0,0,0,879,880,1,0,0,0,880,
+ 882,3,92,46,0,881,878,1,0,0,0,882,885,1,0,0,0,883,881,1,0,0,0,883,
+ 884,1,0,0,0,884,91,1,0,0,0,885,883,1,0,0,0,886,888,5,10,0,0,887,
+ 889,5,131,0,0,888,887,1,0,0,0,888,889,1,0,0,0,889,890,1,0,0,0,890,
+ 891,3,96,48,0,891,93,1,0,0,0,892,894,5,5,0,0,893,895,5,131,0,0,894,
+ 893,1,0,0,0,894,895,1,0,0,0,895,900,1,0,0,0,896,898,3,184,92,0,897,
+ 899,5,131,0,0,898,897,1,0,0,0,898,899,1,0,0,0,899,901,1,0,0,0,900,
+ 896,1,0,0,0,900,901,1,0,0,0,901,912,1,0,0,0,902,904,5,12,0,0,903,
+ 905,5,131,0,0,904,903,1,0,0,0,904,905,1,0,0,0,905,910,1,0,0,0,906,
+ 908,3,184,92,0,907,909,5,131,0,0,908,907,1,0,0,0,908,909,1,0,0,0,
+ 909,911,1,0,0,0,910,906,1,0,0,0,910,911,1,0,0,0,911,913,1,0,0,0,
+ 912,902,1,0,0,0,912,913,1,0,0,0,913,95,1,0,0,0,914,915,3,188,94,
+ 0,915,97,1,0,0,0,916,917,3,188,94,0,917,99,1,0,0,0,918,919,3,102,
+ 51,0,919,101,1,0,0,0,920,927,3,104,52,0,921,922,5,131,0,0,922,923,
+ 5,78,0,0,923,924,5,131,0,0,924,926,3,104,52,0,925,921,1,0,0,0,926,
+ 929,1,0,0,0,927,925,1,0,0,0,927,928,1,0,0,0,928,103,1,0,0,0,929,
+ 927,1,0,0,0,930,937,3,106,53,0,931,932,5,131,0,0,932,933,5,79,0,
+ 0,933,934,5,131,0,0,934,936,3,106,53,0,935,931,1,0,0,0,936,939,1,
+ 0,0,0,937,935,1,0,0,0,937,938,1,0,0,0,938,105,1,0,0,0,939,937,1,
+ 0,0,0,940,947,3,108,54,0,941,942,5,131,0,0,942,943,5,80,0,0,943,
+ 944,5,131,0,0,944,946,3,108,54,0,945,941,1,0,0,0,946,949,1,0,0,0,
+ 947,945,1,0,0,0,947,948,1,0,0,0,948,107,1,0,0,0,949,947,1,0,0,0,
+ 950,952,5,81,0,0,951,953,5,131,0,0,952,951,1,0,0,0,952,953,1,0,0,
+ 0,953,955,1,0,0,0,954,950,1,0,0,0,955,958,1,0,0,0,956,954,1,0,0,
+ 0,956,957,1,0,0,0,957,959,1,0,0,0,958,956,1,0,0,0,959,960,3,110,
+ 55,0,960,109,1,0,0,0,961,968,3,112,56,0,962,964,5,131,0,0,963,962,
+ 1,0,0,0,963,964,1,0,0,0,964,965,1,0,0,0,965,967,3,138,69,0,966,963,
+ 1,0,0,0,967,970,1,0,0,0,968,966,1,0,0,0,968,969,1,0,0,0,969,111,
+ 1,0,0,0,970,968,1,0,0,0,971,990,3,114,57,0,972,974,5,131,0,0,973,
+ 972,1,0,0,0,973,974,1,0,0,0,974,975,1,0,0,0,975,977,5,13,0,0,976,
+ 978,5,131,0,0,977,976,1,0,0,0,977,978,1,0,0,0,978,979,1,0,0,0,979,
+ 989,3,114,57,0,980,982,5,131,0,0,981,980,1,0,0,0,981,982,1,0,0,0,
+ 982,983,1,0,0,0,983,985,5,14,0,0,984,986,5,131,0,0,985,984,1,0,0,
+ 0,985,986,1,0,0,0,986,987,1,0,0,0,987,989,3,114,57,0,988,973,1,0,
+ 0,0,988,981,1,0,0,0,989,992,1,0,0,0,990,988,1,0,0,0,990,991,1,0,
+ 0,0,991,113,1,0,0,0,992,990,1,0,0,0,993,1020,3,116,58,0,994,996,
+ 5,131,0,0,995,994,1,0,0,0,995,996,1,0,0,0,996,997,1,0,0,0,997,999,
+ 5,5,0,0,998,1000,5,131,0,0,999,998,1,0,0,0,999,1000,1,0,0,0,1000,
+ 1001,1,0,0,0,1001,1019,3,116,58,0,1002,1004,5,131,0,0,1003,1002,
+ 1,0,0,0,1003,1004,1,0,0,0,1004,1005,1,0,0,0,1005,1007,5,15,0,0,1006,
+ 1008,5,131,0,0,1007,1006,1,0,0,0,1007,1008,1,0,0,0,1008,1009,1,0,
+ 0,0,1009,1019,3,116,58,0,1010,1012,5,131,0,0,1011,1010,1,0,0,0,1011,
+ 1012,1,0,0,0,1012,1013,1,0,0,0,1013,1015,5,16,0,0,1014,1016,5,131,
+ 0,0,1015,1014,1,0,0,0,1015,1016,1,0,0,0,1016,1017,1,0,0,0,1017,1019,
+ 3,116,58,0,1018,995,1,0,0,0,1018,1003,1,0,0,0,1018,1011,1,0,0,0,
+ 1019,1022,1,0,0,0,1020,1018,1,0,0,0,1020,1021,1,0,0,0,1021,115,1,
+ 0,0,0,1022,1020,1,0,0,0,1023,1034,3,118,59,0,1024,1026,5,131,0,0,
+ 1025,1024,1,0,0,0,1025,1026,1,0,0,0,1026,1027,1,0,0,0,1027,1029,
+ 5,17,0,0,1028,1030,5,131,0,0,1029,1028,1,0,0,0,1029,1030,1,0,0,0,
+ 1030,1031,1,0,0,0,1031,1033,3,118,59,0,1032,1025,1,0,0,0,1033,1036,
+ 1,0,0,0,1034,1032,1,0,0,0,1034,1035,1,0,0,0,1035,117,1,0,0,0,1036,
+ 1034,1,0,0,0,1037,1039,7,1,0,0,1038,1040,5,131,0,0,1039,1038,1,0,
+ 0,0,1039,1040,1,0,0,0,1040,1042,1,0,0,0,1041,1037,1,0,0,0,1042,1045,
+ 1,0,0,0,1043,1041,1,0,0,0,1043,1044,1,0,0,0,1044,1046,1,0,0,0,1045,
+ 1043,1,0,0,0,1046,1047,3,120,60,0,1047,119,1,0,0,0,1048,1054,3,128,
+ 64,0,1049,1053,3,124,62,0,1050,1053,3,122,61,0,1051,1053,3,126,63,
+ 0,1052,1049,1,0,0,0,1052,1050,1,0,0,0,1052,1051,1,0,0,0,1053,1056,
+ 1,0,0,0,1054,1052,1,0,0,0,1054,1055,1,0,0,0,1055,121,1,0,0,0,1056,
+ 1054,1,0,0,0,1057,1058,5,131,0,0,1058,1060,5,82,0,0,1059,1061,5,
+ 131,0,0,1060,1059,1,0,0,0,1060,1061,1,0,0,0,1061,1062,1,0,0,0,1062,
+ 1083,3,128,64,0,1063,1065,5,131,0,0,1064,1063,1,0,0,0,1064,1065,
+ 1,0,0,0,1065,1066,1,0,0,0,1066,1067,5,8,0,0,1067,1068,3,100,50,0,
+ 1068,1069,5,9,0,0,1069,1083,1,0,0,0,1070,1072,5,131,0,0,1071,1070,
+ 1,0,0,0,1071,1072,1,0,0,0,1072,1073,1,0,0,0,1073,1075,5,8,0,0,1074,
+ 1076,3,100,50,0,1075,1074,1,0,0,0,1075,1076,1,0,0,0,1076,1077,1,
+ 0,0,0,1077,1079,5,12,0,0,1078,1080,3,100,50,0,1079,1078,1,0,0,0,
+ 1079,1080,1,0,0,0,1080,1081,1,0,0,0,1081,1083,5,9,0,0,1082,1057,
+ 1,0,0,0,1082,1064,1,0,0,0,1082,1071,1,0,0,0,1083,123,1,0,0,0,1084,
+ 1085,5,131,0,0,1085,1086,5,83,0,0,1086,1087,5,131,0,0,1087,1097,
+ 5,63,0,0,1088,1089,5,131,0,0,1089,1090,5,84,0,0,1090,1091,5,131,
+ 0,0,1091,1097,5,63,0,0,1092,1093,5,131,0,0,1093,1097,5,85,0,0,1094,
+ 1095,5,131,0,0,1095,1097,5,86,0,0,1096,1084,1,0,0,0,1096,1088,1,
+ 0,0,0,1096,1092,1,0,0,0,1096,1094,1,0,0,0,1097,1099,1,0,0,0,1098,
+ 1100,5,131,0,0,1099,1098,1,0,0,0,1099,1100,1,0,0,0,1100,1101,1,0,
+ 0,0,1101,1102,3,128,64,0,1102,125,1,0,0,0,1103,1104,5,131,0,0,1104,
+ 1105,5,87,0,0,1105,1106,5,131,0,0,1106,1114,5,88,0,0,1107,1108,5,
+ 131,0,0,1108,1109,5,87,0,0,1109,1110,5,131,0,0,1110,1111,5,81,0,
+ 0,1111,1112,5,131,0,0,1112,1114,5,88,0,0,1113,1103,1,0,0,0,1113,
+ 1107,1,0,0,0,1114,127,1,0,0,0,1115,1122,3,130,65,0,1116,1118,5,131,
+ 0,0,1117,1116,1,0,0,0,1117,1118,1,0,0,0,1118,1119,1,0,0,0,1119,1121,
+ 3,166,83,0,1120,1117,1,0,0,0,1121,1124,1,0,0,0,1122,1120,1,0,0,0,
+ 1122,1123,1,0,0,0,1123,1129,1,0,0,0,1124,1122,1,0,0,0,1125,1127,
+ 5,131,0,0,1126,1125,1,0,0,0,1126,1127,1,0,0,0,1127,1128,1,0,0,0,
+ 1128,1130,3,90,45,0,1129,1126,1,0,0,0,1129,1130,1,0,0,0,1130,129,
+ 1,0,0,0,1131,1210,3,132,66,0,1132,1210,3,178,89,0,1133,1210,3,168,
+ 84,0,1134,1136,5,89,0,0,1135,1137,5,131,0,0,1136,1135,1,0,0,0,1136,
+ 1137,1,0,0,0,1137,1138,1,0,0,0,1138,1140,5,6,0,0,1139,1141,5,131,
+ 0,0,1140,1139,1,0,0,0,1140,1141,1,0,0,0,1141,1142,1,0,0,0,1142,1144,
+ 5,5,0,0,1143,1145,5,131,0,0,1144,1143,1,0,0,0,1144,1145,1,0,0,0,
+ 1145,1146,1,0,0,0,1146,1210,5,7,0,0,1147,1210,3,162,81,0,1148,1210,
+ 3,164,82,0,1149,1151,5,49,0,0,1150,1152,5,131,0,0,1151,1150,1,0,
+ 0,0,1151,1152,1,0,0,0,1152,1153,1,0,0,0,1153,1155,5,6,0,0,1154,1156,
+ 5,131,0,0,1155,1154,1,0,0,0,1155,1156,1,0,0,0,1156,1157,1,0,0,0,
+ 1157,1159,3,144,72,0,1158,1160,5,131,0,0,1159,1158,1,0,0,0,1159,
+ 1160,1,0,0,0,1160,1161,1,0,0,0,1161,1162,5,7,0,0,1162,1210,1,0,0,
+ 0,1163,1165,5,90,0,0,1164,1166,5,131,0,0,1165,1164,1,0,0,0,1165,
+ 1166,1,0,0,0,1166,1167,1,0,0,0,1167,1169,5,6,0,0,1168,1170,5,131,
+ 0,0,1169,1168,1,0,0,0,1169,1170,1,0,0,0,1170,1171,1,0,0,0,1171,1173,
+ 3,144,72,0,1172,1174,5,131,0,0,1173,1172,1,0,0,0,1173,1174,1,0,0,
+ 0,1174,1175,1,0,0,0,1175,1176,5,7,0,0,1176,1210,1,0,0,0,1177,1179,
+ 5,91,0,0,1178,1180,5,131,0,0,1179,1178,1,0,0,0,1179,1180,1,0,0,0,
+ 1180,1181,1,0,0,0,1181,1183,5,6,0,0,1182,1184,5,131,0,0,1183,1182,
+ 1,0,0,0,1183,1184,1,0,0,0,1184,1185,1,0,0,0,1185,1187,3,144,72,0,
+ 1186,1188,5,131,0,0,1187,1186,1,0,0,0,1187,1188,1,0,0,0,1188,1189,
+ 1,0,0,0,1189,1190,5,7,0,0,1190,1210,1,0,0,0,1191,1193,5,92,0,0,1192,
+ 1194,5,131,0,0,1193,1192,1,0,0,0,1193,1194,1,0,0,0,1194,1195,1,0,
+ 0,0,1195,1197,5,6,0,0,1196,1198,5,131,0,0,1197,1196,1,0,0,0,1197,
+ 1198,1,0,0,0,1198,1199,1,0,0,0,1199,1201,3,144,72,0,1200,1202,5,
+ 131,0,0,1201,1200,1,0,0,0,1201,1202,1,0,0,0,1202,1203,1,0,0,0,1203,
+ 1204,5,7,0,0,1204,1210,1,0,0,0,1205,1210,3,142,71,0,1206,1210,3,
+ 140,70,0,1207,1210,3,148,74,0,1208,1210,3,172,86,0,1209,1131,1,0,
+ 0,0,1209,1132,1,0,0,0,1209,1133,1,0,0,0,1209,1134,1,0,0,0,1209,1147,
+ 1,0,0,0,1209,1148,1,0,0,0,1209,1149,1,0,0,0,1209,1163,1,0,0,0,1209,
+ 1177,1,0,0,0,1209,1191,1,0,0,0,1209,1205,1,0,0,0,1209,1206,1,0,0,
+ 0,1209,1207,1,0,0,0,1209,1208,1,0,0,0,1210,131,1,0,0,0,1211,1218,
+ 3,174,87,0,1212,1218,5,101,0,0,1213,1218,3,134,67,0,1214,1218,5,
+ 88,0,0,1215,1218,3,176,88,0,1216,1218,3,136,68,0,1217,1211,1,0,0,
+ 0,1217,1212,1,0,0,0,1217,1213,1,0,0,0,1217,1214,1,0,0,0,1217,1215,
+ 1,0,0,0,1217,1216,1,0,0,0,1218,133,1,0,0,0,1219,1220,7,2,0,0,1220,
+ 135,1,0,0,0,1221,1223,5,8,0,0,1222,1224,5,131,0,0,1223,1222,1,0,
+ 0,0,1223,1224,1,0,0,0,1224,1242,1,0,0,0,1225,1227,3,100,50,0,1226,
+ 1228,5,131,0,0,1227,1226,1,0,0,0,1227,1228,1,0,0,0,1228,1239,1,0,
+ 0,0,1229,1231,5,2,0,0,1230,1232,5,131,0,0,1231,1230,1,0,0,0,1231,
+ 1232,1,0,0,0,1232,1233,1,0,0,0,1233,1235,3,100,50,0,1234,1236,5,
+ 131,0,0,1235,1234,1,0,0,0,1235,1236,1,0,0,0,1236,1238,1,0,0,0,1237,
+ 1229,1,0,0,0,1238,1241,1,0,0,0,1239,1237,1,0,0,0,1239,1240,1,0,0,
+ 0,1240,1243,1,0,0,0,1241,1239,1,0,0,0,1242,1225,1,0,0,0,1242,1243,
+ 1,0,0,0,1243,1244,1,0,0,0,1244,1245,5,9,0,0,1245,137,1,0,0,0,1246,
+ 1248,5,3,0,0,1247,1249,5,131,0,0,1248,1247,1,0,0,0,1248,1249,1,0,
+ 0,0,1249,1250,1,0,0,0,1250,1277,3,112,56,0,1251,1253,5,18,0,0,1252,
+ 1254,5,131,0,0,1253,1252,1,0,0,0,1253,1254,1,0,0,0,1254,1255,1,0,
+ 0,0,1255,1277,3,112,56,0,1256,1258,5,19,0,0,1257,1259,5,131,0,0,
+ 1258,1257,1,0,0,0,1258,1259,1,0,0,0,1259,1260,1,0,0,0,1260,1277,
+ 3,112,56,0,1261,1263,5,20,0,0,1262,1264,5,131,0,0,1263,1262,1,0,
+ 0,0,1263,1264,1,0,0,0,1264,1265,1,0,0,0,1265,1277,3,112,56,0,1266,
+ 1268,5,21,0,0,1267,1269,5,131,0,0,1268,1267,1,0,0,0,1268,1269,1,
+ 0,0,0,1269,1270,1,0,0,0,1270,1277,3,112,56,0,1271,1273,5,22,0,0,
+ 1272,1274,5,131,0,0,1273,1272,1,0,0,0,1273,1274,1,0,0,0,1274,1275,
+ 1,0,0,0,1275,1277,3,112,56,0,1276,1246,1,0,0,0,1276,1251,1,0,0,0,
+ 1276,1256,1,0,0,0,1276,1261,1,0,0,0,1276,1266,1,0,0,0,1276,1271,
+ 1,0,0,0,1277,139,1,0,0,0,1278,1280,5,6,0,0,1279,1281,5,131,0,0,1280,
+ 1279,1,0,0,0,1280,1281,1,0,0,0,1281,1282,1,0,0,0,1282,1284,3,100,
+ 50,0,1283,1285,5,131,0,0,1284,1283,1,0,0,0,1284,1285,1,0,0,0,1285,
+ 1286,1,0,0,0,1286,1287,5,7,0,0,1287,141,1,0,0,0,1288,1293,3,78,39,
+ 0,1289,1291,5,131,0,0,1290,1289,1,0,0,0,1290,1291,1,0,0,0,1291,1292,
+ 1,0,0,0,1292,1294,3,80,40,0,1293,1290,1,0,0,0,1294,1295,1,0,0,0,
+ 1295,1293,1,0,0,0,1295,1296,1,0,0,0,1296,143,1,0,0,0,1297,1302,3,
+ 146,73,0,1298,1300,5,131,0,0,1299,1298,1,0,0,0,1299,1300,1,0,0,0,
+ 1300,1301,1,0,0,0,1301,1303,3,68,34,0,1302,1299,1,0,0,0,1302,1303,
+ 1,0,0,0,1303,145,1,0,0,0,1304,1305,3,172,86,0,1305,1306,5,131,0,
+ 0,1306,1307,5,82,0,0,1307,1308,5,131,0,0,1308,1309,3,100,50,0,1309,
+ 147,1,0,0,0,1310,1312,3,150,75,0,1311,1313,5,131,0,0,1312,1311,1,
+ 0,0,0,1312,1313,1,0,0,0,1313,1314,1,0,0,0,1314,1316,5,6,0,0,1315,
+ 1317,5,131,0,0,1316,1315,1,0,0,0,1316,1317,1,0,0,0,1317,1322,1,0,
+ 0,0,1318,1320,5,64,0,0,1319,1321,5,131,0,0,1320,1319,1,0,0,0,1320,
+ 1321,1,0,0,0,1321,1323,1,0,0,0,1322,1318,1,0,0,0,1322,1323,1,0,0,
+ 0,1323,1341,1,0,0,0,1324,1326,3,100,50,0,1325,1327,5,131,0,0,1326,
+ 1325,1,0,0,0,1326,1327,1,0,0,0,1327,1338,1,0,0,0,1328,1330,5,2,0,
+ 0,1329,1331,5,131,0,0,1330,1329,1,0,0,0,1330,1331,1,0,0,0,1331,1332,
+ 1,0,0,0,1332,1334,3,100,50,0,1333,1335,5,131,0,0,1334,1333,1,0,0,
+ 0,1334,1335,1,0,0,0,1335,1337,1,0,0,0,1336,1328,1,0,0,0,1337,1340,
+ 1,0,0,0,1338,1336,1,0,0,0,1338,1339,1,0,0,0,1339,1342,1,0,0,0,1340,
+ 1338,1,0,0,0,1341,1324,1,0,0,0,1341,1342,1,0,0,0,1342,1343,1,0,0,
+ 0,1343,1344,5,7,0,0,1344,149,1,0,0,0,1345,1346,3,160,80,0,1346,1347,
+ 3,190,95,0,1347,1350,1,0,0,0,1348,1350,5,95,0,0,1349,1345,1,0,0,
+ 0,1349,1348,1,0,0,0,1350,151,1,0,0,0,1351,1353,3,158,79,0,1352,1354,
+ 5,131,0,0,1353,1352,1,0,0,0,1353,1354,1,0,0,0,1354,1355,1,0,0,0,
+ 1355,1357,5,6,0,0,1356,1358,5,131,0,0,1357,1356,1,0,0,0,1357,1358,
+ 1,0,0,0,1358,1376,1,0,0,0,1359,1361,3,100,50,0,1360,1362,5,131,0,
+ 0,1361,1360,1,0,0,0,1361,1362,1,0,0,0,1362,1373,1,0,0,0,1363,1365,
+ 5,2,0,0,1364,1366,5,131,0,0,1365,1364,1,0,0,0,1365,1366,1,0,0,0,
+ 1366,1367,1,0,0,0,1367,1369,3,100,50,0,1368,1370,5,131,0,0,1369,
+ 1368,1,0,0,0,1369,1370,1,0,0,0,1370,1372,1,0,0,0,1371,1363,1,0,0,
+ 0,1372,1375,1,0,0,0,1373,1371,1,0,0,0,1373,1374,1,0,0,0,1374,1377,
+ 1,0,0,0,1375,1373,1,0,0,0,1376,1359,1,0,0,0,1376,1377,1,0,0,0,1377,
+ 1378,1,0,0,0,1378,1379,5,7,0,0,1379,153,1,0,0,0,1380,1381,3,158,
+ 79,0,1381,155,1,0,0,0,1382,1383,3,190,95,0,1383,157,1,0,0,0,1384,
+ 1385,3,160,80,0,1385,1386,3,190,95,0,1386,159,1,0,0,0,1387,1388,
+ 3,190,95,0,1388,1389,5,23,0,0,1389,1391,1,0,0,0,1390,1387,1,0,0,
+ 0,1391,1394,1,0,0,0,1392,1390,1,0,0,0,1392,1393,1,0,0,0,1393,161,
+ 1,0,0,0,1394,1392,1,0,0,0,1395,1397,5,8,0,0,1396,1398,5,131,0,0,
+ 1397,1396,1,0,0,0,1397,1398,1,0,0,0,1398,1399,1,0,0,0,1399,1408,
+ 3,144,72,0,1400,1402,5,131,0,0,1401,1400,1,0,0,0,1401,1402,1,0,0,
+ 0,1402,1403,1,0,0,0,1403,1405,5,11,0,0,1404,1406,5,131,0,0,1405,
+ 1404,1,0,0,0,1405,1406,1,0,0,0,1406,1407,1,0,0,0,1407,1409,3,100,
+ 50,0,1408,1401,1,0,0,0,1408,1409,1,0,0,0,1409,1411,1,0,0,0,1410,
+ 1412,5,131,0,0,1411,1410,1,0,0,0,1411,1412,1,0,0,0,1412,1413,1,0,
+ 0,0,1413,1414,5,9,0,0,1414,163,1,0,0,0,1415,1417,5,8,0,0,1416,1418,
+ 5,131,0,0,1417,1416,1,0,0,0,1417,1418,1,0,0,0,1418,1427,1,0,0,0,
+ 1419,1421,3,172,86,0,1420,1422,5,131,0,0,1421,1420,1,0,0,0,1421,
+ 1422,1,0,0,0,1422,1423,1,0,0,0,1423,1425,5,3,0,0,1424,1426,5,131,
+ 0,0,1425,1424,1,0,0,0,1425,1426,1,0,0,0,1426,1428,1,0,0,0,1427,1419,
+ 1,0,0,0,1427,1428,1,0,0,0,1428,1429,1,0,0,0,1429,1431,3,142,71,0,
+ 1430,1432,5,131,0,0,1431,1430,1,0,0,0,1431,1432,1,0,0,0,1432,1441,
+ 1,0,0,0,1433,1435,5,77,0,0,1434,1436,5,131,0,0,1435,1434,1,0,0,0,
+ 1435,1436,1,0,0,0,1436,1437,1,0,0,0,1437,1439,3,100,50,0,1438,1440,
+ 5,131,0,0,1439,1438,1,0,0,0,1439,1440,1,0,0,0,1440,1442,1,0,0,0,
+ 1441,1433,1,0,0,0,1441,1442,1,0,0,0,1442,1443,1,0,0,0,1443,1445,
+ 5,11,0,0,1444,1446,5,131,0,0,1445,1444,1,0,0,0,1445,1446,1,0,0,0,
+ 1446,1447,1,0,0,0,1447,1449,3,100,50,0,1448,1450,5,131,0,0,1449,
+ 1448,1,0,0,0,1449,1450,1,0,0,0,1450,1451,1,0,0,0,1451,1452,5,9,0,
+ 0,1452,165,1,0,0,0,1453,1455,5,23,0,0,1454,1456,5,131,0,0,1455,1454,
+ 1,0,0,0,1455,1456,1,0,0,0,1456,1457,1,0,0,0,1457,1458,3,182,91,0,
+ 1458,167,1,0,0,0,1459,1464,5,96,0,0,1460,1462,5,131,0,0,1461,1460,
+ 1,0,0,0,1461,1462,1,0,0,0,1462,1463,1,0,0,0,1463,1465,3,170,85,0,
+ 1464,1461,1,0,0,0,1465,1466,1,0,0,0,1466,1464,1,0,0,0,1466,1467,
+ 1,0,0,0,1467,1482,1,0,0,0,1468,1470,5,96,0,0,1469,1471,5,131,0,0,
+ 1470,1469,1,0,0,0,1470,1471,1,0,0,0,1471,1472,1,0,0,0,1472,1477,
+ 3,100,50,0,1473,1475,5,131,0,0,1474,1473,1,0,0,0,1474,1475,1,0,0,
+ 0,1475,1476,1,0,0,0,1476,1478,3,170,85,0,1477,1474,1,0,0,0,1478,
+ 1479,1,0,0,0,1479,1477,1,0,0,0,1479,1480,1,0,0,0,1480,1482,1,0,0,
+ 0,1481,1459,1,0,0,0,1481,1468,1,0,0,0,1482,1491,1,0,0,0,1483,1485,
+ 5,131,0,0,1484,1483,1,0,0,0,1484,1485,1,0,0,0,1485,1486,1,0,0,0,
+ 1486,1488,5,97,0,0,1487,1489,5,131,0,0,1488,1487,1,0,0,0,1488,1489,
+ 1,0,0,0,1489,1490,1,0,0,0,1490,1492,3,100,50,0,1491,1484,1,0,0,0,
+ 1491,1492,1,0,0,0,1492,1494,1,0,0,0,1493,1495,5,131,0,0,1494,1493,
+ 1,0,0,0,1494,1495,1,0,0,0,1495,1496,1,0,0,0,1496,1497,5,98,0,0,1497,
+ 169,1,0,0,0,1498,1500,5,99,0,0,1499,1501,5,131,0,0,1500,1499,1,0,
+ 0,0,1500,1501,1,0,0,0,1501,1502,1,0,0,0,1502,1504,3,100,50,0,1503,
+ 1505,5,131,0,0,1504,1503,1,0,0,0,1504,1505,1,0,0,0,1505,1506,1,0,
+ 0,0,1506,1508,5,100,0,0,1507,1509,5,131,0,0,1508,1507,1,0,0,0,1508,
+ 1509,1,0,0,0,1509,1510,1,0,0,0,1510,1511,3,100,50,0,1511,171,1,0,
+ 0,0,1512,1513,3,190,95,0,1513,173,1,0,0,0,1514,1517,3,186,93,0,1515,
+ 1517,3,184,92,0,1516,1514,1,0,0,0,1516,1515,1,0,0,0,1517,175,1,0,
+ 0,0,1518,1520,5,24,0,0,1519,1521,5,131,0,0,1520,1519,1,0,0,0,1520,
+ 1521,1,0,0,0,1521,1555,1,0,0,0,1522,1524,3,182,91,0,1523,1525,5,
+ 131,0,0,1524,1523,1,0,0,0,1524,1525,1,0,0,0,1525,1526,1,0,0,0,1526,
+ 1528,5,10,0,0,1527,1529,5,131,0,0,1528,1527,1,0,0,0,1528,1529,1,
+ 0,0,0,1529,1530,1,0,0,0,1530,1532,3,100,50,0,1531,1533,5,131,0,0,
+ 1532,1531,1,0,0,0,1532,1533,1,0,0,0,1533,1552,1,0,0,0,1534,1536,
+ 5,2,0,0,1535,1537,5,131,0,0,1536,1535,1,0,0,0,1536,1537,1,0,0,0,
+ 1537,1538,1,0,0,0,1538,1540,3,182,91,0,1539,1541,5,131,0,0,1540,
+ 1539,1,0,0,0,1540,1541,1,0,0,0,1541,1542,1,0,0,0,1542,1544,5,10,
+ 0,0,1543,1545,5,131,0,0,1544,1543,1,0,0,0,1544,1545,1,0,0,0,1545,
+ 1546,1,0,0,0,1546,1548,3,100,50,0,1547,1549,5,131,0,0,1548,1547,
+ 1,0,0,0,1548,1549,1,0,0,0,1549,1551,1,0,0,0,1550,1534,1,0,0,0,1551,
+ 1554,1,0,0,0,1552,1550,1,0,0,0,1552,1553,1,0,0,0,1553,1556,1,0,0,
+ 0,1554,1552,1,0,0,0,1555,1522,1,0,0,0,1555,1556,1,0,0,0,1556,1557,
+ 1,0,0,0,1557,1558,5,25,0,0,1558,177,1,0,0,0,1559,1562,5,26,0,0,1560,
+ 1563,3,190,95,0,1561,1563,5,104,0,0,1562,1560,1,0,0,0,1562,1561,
+ 1,0,0,0,1563,179,1,0,0,0,1564,1569,3,130,65,0,1565,1567,5,131,0,
+ 0,1566,1565,1,0,0,0,1566,1567,1,0,0,0,1567,1568,1,0,0,0,1568,1570,
+ 3,166,83,0,1569,1566,1,0,0,0,1570,1571,1,0,0,0,1571,1569,1,0,0,0,
+ 1571,1572,1,0,0,0,1572,181,1,0,0,0,1573,1574,3,188,94,0,1574,183,
+ 1,0,0,0,1575,1576,7,3,0,0,1576,185,1,0,0,0,1577,1578,7,4,0,0,1578,
+ 187,1,0,0,0,1579,1582,3,190,95,0,1580,1582,3,192,96,0,1581,1579,
+ 1,0,0,0,1581,1580,1,0,0,0,1582,189,1,0,0,0,1583,1584,7,5,0,0,1584,
+ 191,1,0,0,0,1585,1586,7,6,0,0,1586,193,1,0,0,0,1587,1588,7,7,0,0,
+ 1588,195,1,0,0,0,1589,1590,7,8,0,0,1590,197,1,0,0,0,1591,1592,7,
+ 9,0,0,1592,199,1,0,0,0,293,201,205,208,211,218,223,226,230,234,239,
+ 246,251,254,258,262,266,272,276,281,286,290,293,295,299,303,308,
+ 312,317,321,330,335,339,343,347,352,356,359,363,373,380,393,397,
+ 403,407,411,416,421,425,431,435,441,445,451,455,459,463,467,471,
+ 476,483,487,492,499,505,510,516,522,527,531,536,539,542,545,552,
+ 558,561,566,569,573,576,584,588,592,596,600,605,610,614,619,622,
+ 631,640,645,658,661,677,685,689,694,699,703,708,714,719,726,730,
+ 734,736,740,742,746,748,754,760,764,767,770,774,780,784,787,790,
+ 796,799,802,806,812,815,818,822,826,830,832,836,838,841,845,847,
+ 853,857,861,865,868,873,878,883,888,894,898,900,904,908,910,912,
+ 927,937,947,952,956,963,968,973,977,981,985,988,990,995,999,1003,
+ 1007,1011,1015,1018,1020,1025,1029,1034,1039,1043,1052,1054,1060,
+ 1064,1071,1075,1079,1082,1096,1099,1113,1117,1122,1126,1129,1136,
+ 1140,1144,1151,1155,1159,1165,1169,1173,1179,1183,1187,1193,1197,
+ 1201,1209,1217,1223,1227,1231,1235,1239,1242,1248,1253,1258,1263,
+ 1268,1273,1276,1280,1284,1290,1295,1299,1302,1312,1316,1320,1322,
+ 1326,1330,1334,1338,1341,1349,1353,1357,1361,1365,1369,1373,1376,
+ 1392,1397,1401,1405,1408,1411,1417,1421,1425,1427,1431,1435,1439,
+ 1441,1445,1449,1455,1461,1466,1470,1474,1479,1481,1484,1488,1491,
+ 1494,1500,1504,1508,1516,1520,1524,1528,1532,1536,1540,1544,1548,
+ 1552,1555,1562,1566,1571,1581
+ ]
+
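+ # The serializedATN above is the grammar's augmented transition network
+ # flattened into an int array by the ANTLR tool; ATNDeserializer rebuilds
+ # it once at import time and the parser consults it for adaptive prediction.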
+class LcypherParser ( Parser ):
+
+ grammarFileName = "Lcypher.g4"
+
+ atn = ATNDeserializer().deserialize(serializedATN())
+
+ decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+ sharedContextCache = PredictionContextCache()
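+ # atn, decisionsToDFA and sharedContextCache are class-level state, so all
+ # LcypherParser instances share one DFA cache that warms up across parses.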
+
+ literalNames = [ "", "';'", "','", "'='", "'+='", "'*'", "'('",
+ "')'", "'['", "']'", "':'", "'|'", "'..'", "'+'", "'-'",
+ "'/'", "'%'", "'^'", "'<>'", "'<'", "'>'", "'<='",
+ "'>='", "'.'", "'{'", "'}'", "'$'", "'\\u27E8'", "'\\u3008'",
+ "'\\uFE64'", "'\\uFF1C'", "'\\u27E9'", "'\\u3009'",
+ "'\\uFE65'", "'\\uFF1E'", "'\\u00AD'", "'\\u2010'",
+ "'\\u2011'", "'\\u2012'", "'\\u2013'", "'\\u2014'",
+ "'\\u2015'", "'\\u2212'", "'\\uFE58'", "'\\uFE63'",
+ "'\\uFF0D'", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "",
+ "", "", "", "'0'" ]
+
+ symbolicNames = [ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+ "<INVALID>", "<INVALID>", "EXPLAIN", "PROFILE", "UNION",
+ "ALL", "OPTIONAL_", "MATCH", "UNWIND", "AS", "MERGE",
+ "ON", "CREATE", "SET", "DETACH", "DELETE_", "REMOVE",
+ "CALL", "YIELD", "WITH", "DISTINCT", "RETURN", "ORDER",
+ "BY", "L_SKIP", "LIMIT", "ASCENDING", "ASC", "DESCENDING",
+ "DESC", "USING", "JOIN", "START", "WHERE", "OR", "XOR",
+ "AND", "NOT", "IN", "STARTS", "ENDS", "CONTAINS",
+ "REGEXP", "IS", "NULL_", "COUNT", "ANY", "NONE", "SINGLE",
+ "TRUE_", "FALSE_", "EXISTS", "CASE", "ELSE", "END",
+ "WHEN", "THEN", "StringLiteral", "EscapedChar", "HexInteger",
+ "DecimalInteger", "OctalInteger", "HexLetter", "HexDigit",
+ "Digit", "NonZeroDigit", "NonZeroOctDigit", "OctDigit",
+ "ZeroDigit", "ExponentDecimalReal", "RegularDecimalReal",
+ "FILTER", "EXTRACT", "UnescapedSymbolicName", "CONSTRAINT",
+ "DO", "FOR", "REQUIRE", "UNIQUE", "MANDATORY", "SCALAR",
+ "OF", "ADD", "DROP", "IdentifierStart", "IdentifierPart",
+ "EscapedSymbolicName", "SP", "WHITESPACE", "Comment" ]
+
+ RULE_oC_Cypher = 0
+ RULE_oC_Statement = 1
+ RULE_oC_Query = 2
+ RULE_oC_RegularQuery = 3
+ RULE_oC_Union = 4
+ RULE_oC_SingleQuery = 5
+ RULE_oC_SinglePartQuery = 6
+ RULE_oC_MultiPartQuery = 7
+ RULE_oC_UpdatingClause = 8
+ RULE_oC_ReadingClause = 9
+ RULE_oC_Match = 10
+ RULE_oC_Unwind = 11
+ RULE_oC_Merge = 12
+ RULE_oC_MergeAction = 13
+ RULE_oC_Create = 14
+ RULE_oC_Set = 15
+ RULE_oC_SetItem = 16
+ RULE_oC_Delete = 17
+ RULE_oC_Remove = 18
+ RULE_oC_RemoveItem = 19
+ RULE_oC_InQueryCall = 20
+ RULE_oC_StandaloneCall = 21
+ RULE_oC_YieldItems = 22
+ RULE_oC_YieldItem = 23
+ RULE_oC_With = 24
+ RULE_oC_Return = 25
+ RULE_oC_ReturnBody = 26
+ RULE_oC_ReturnItems = 27
+ RULE_oC_ReturnItem = 28
+ RULE_oC_Order = 29
+ RULE_oC_Skip = 30
+ RULE_oC_Limit = 31
+ RULE_oC_SortItem = 32
+ RULE_oC_Hint = 33
+ RULE_oC_Where = 34
+ RULE_oC_Pattern = 35
+ RULE_oC_PatternPart = 36
+ RULE_oC_AnonymousPatternPart = 37
+ RULE_oC_PatternElement = 38
+ RULE_oC_NodePattern = 39
+ RULE_oC_PatternElementChain = 40
+ RULE_oC_RelationshipPattern = 41
+ RULE_oC_RelationshipDetail = 42
+ RULE_oC_Properties = 43
+ RULE_oC_RelationshipTypes = 44
+ RULE_oC_NodeLabels = 45
+ RULE_oC_NodeLabel = 46
+ RULE_oC_RangeLiteral = 47
+ RULE_oC_LabelName = 48
+ RULE_oC_RelTypeName = 49
+ RULE_oC_Expression = 50
+ RULE_oC_OrExpression = 51
+ RULE_oC_XorExpression = 52
+ RULE_oC_AndExpression = 53
+ RULE_oC_NotExpression = 54
+ RULE_oC_ComparisonExpression = 55
+ RULE_oC_AddOrSubtractExpression = 56
+ RULE_oC_MultiplyDivideModuloExpression = 57
+ RULE_oC_PowerOfExpression = 58
+ RULE_oC_UnaryAddOrSubtractExpression = 59
+ RULE_oC_StringListNullOperatorExpression = 60
+ RULE_oC_ListOperatorExpression = 61
+ RULE_oC_StringOperatorExpression = 62
+ RULE_oC_NullOperatorExpression = 63
+ RULE_oC_PropertyOrLabelsExpression = 64
+ RULE_oC_Atom = 65
+ RULE_oC_Literal = 66
+ RULE_oC_BooleanLiteral = 67
+ RULE_oC_ListLiteral = 68
+ RULE_oC_PartialComparisonExpression = 69
+ RULE_oC_ParenthesizedExpression = 70
+ RULE_oC_RelationshipsPattern = 71
+ RULE_oC_FilterExpression = 72
+ RULE_oC_IdInColl = 73
+ RULE_oC_FunctionInvocation = 74
+ RULE_oC_FunctionName = 75
+ RULE_oC_ExplicitProcedureInvocation = 76
+ RULE_oC_ImplicitProcedureInvocation = 77
+ RULE_oC_ProcedureResultField = 78
+ RULE_oC_ProcedureName = 79
+ RULE_oC_Namespace = 80
+ RULE_oC_ListComprehension = 81
+ RULE_oC_PatternComprehension = 82
+ RULE_oC_PropertyLookup = 83
+ RULE_oC_CaseExpression = 84
+ RULE_oC_CaseAlternatives = 85
+ RULE_oC_Variable = 86
+ RULE_oC_NumberLiteral = 87
+ RULE_oC_MapLiteral = 88
+ RULE_oC_Parameter = 89
+ RULE_oC_PropertyExpression = 90
+ RULE_oC_PropertyKeyName = 91
+ RULE_oC_IntegerLiteral = 92
+ RULE_oC_DoubleLiteral = 93
+ RULE_oC_SchemaName = 94
+ RULE_oC_SymbolicName = 95
+ RULE_oC_ReservedWord = 96
+ RULE_oC_LeftArrowHead = 97
+ RULE_oC_RightArrowHead = 98
+ RULE_oC_Dash = 99
+
+ ruleNames = [ "oC_Cypher", "oC_Statement", "oC_Query", "oC_RegularQuery",
+ "oC_Union", "oC_SingleQuery", "oC_SinglePartQuery", "oC_MultiPartQuery",
+ "oC_UpdatingClause", "oC_ReadingClause", "oC_Match",
+ "oC_Unwind", "oC_Merge", "oC_MergeAction", "oC_Create",
+ "oC_Set", "oC_SetItem", "oC_Delete", "oC_Remove", "oC_RemoveItem",
+ "oC_InQueryCall", "oC_StandaloneCall", "oC_YieldItems",
+ "oC_YieldItem", "oC_With", "oC_Return", "oC_ReturnBody",
+ "oC_ReturnItems", "oC_ReturnItem", "oC_Order", "oC_Skip",
+ "oC_Limit", "oC_SortItem", "oC_Hint", "oC_Where", "oC_Pattern",
+ "oC_PatternPart", "oC_AnonymousPatternPart", "oC_PatternElement",
+ "oC_NodePattern", "oC_PatternElementChain", "oC_RelationshipPattern",
+ "oC_RelationshipDetail", "oC_Properties", "oC_RelationshipTypes",
+ "oC_NodeLabels", "oC_NodeLabel", "oC_RangeLiteral", "oC_LabelName",
+ "oC_RelTypeName", "oC_Expression", "oC_OrExpression",
+ "oC_XorExpression", "oC_AndExpression", "oC_NotExpression",
+ "oC_ComparisonExpression", "oC_AddOrSubtractExpression",
+ "oC_MultiplyDivideModuloExpression", "oC_PowerOfExpression",
+ "oC_UnaryAddOrSubtractExpression", "oC_StringListNullOperatorExpression",
+ "oC_ListOperatorExpression", "oC_StringOperatorExpression",
+ "oC_NullOperatorExpression", "oC_PropertyOrLabelsExpression",
+ "oC_Atom", "oC_Literal", "oC_BooleanLiteral", "oC_ListLiteral",
+ "oC_PartialComparisonExpression", "oC_ParenthesizedExpression",
+ "oC_RelationshipsPattern", "oC_FilterExpression", "oC_IdInColl",
+ "oC_FunctionInvocation", "oC_FunctionName", "oC_ExplicitProcedureInvocation",
+ "oC_ImplicitProcedureInvocation", "oC_ProcedureResultField",
+ "oC_ProcedureName", "oC_Namespace", "oC_ListComprehension",
+ "oC_PatternComprehension", "oC_PropertyLookup", "oC_CaseExpression",
+ "oC_CaseAlternatives", "oC_Variable", "oC_NumberLiteral",
+ "oC_MapLiteral", "oC_Parameter", "oC_PropertyExpression",
+ "oC_PropertyKeyName", "oC_IntegerLiteral", "oC_DoubleLiteral",
+ "oC_SchemaName", "oC_SymbolicName", "oC_ReservedWord",
+ "oC_LeftArrowHead", "oC_RightArrowHead", "oC_Dash" ]
+
+ EOF = Token.EOF
+ T__0=1
+ T__1=2
+ T__2=3
+ T__3=4
+ T__4=5
+ T__5=6
+ T__6=7
+ T__7=8
+ T__8=9
+ T__9=10
+ T__10=11
+ T__11=12
+ T__12=13
+ T__13=14
+ T__14=15
+ T__15=16
+ T__16=17
+ T__17=18
+ T__18=19
+ T__19=20
+ T__20=21
+ T__21=22
+ T__22=23
+ T__23=24
+ T__24=25
+ T__25=26
+ T__26=27
+ T__27=28
+ T__28=29
+ T__29=30
+ T__30=31
+ T__31=32
+ T__32=33
+ T__33=34
+ T__34=35
+ T__35=36
+ T__36=37
+ T__37=38
+ T__38=39
+ T__39=40
+ T__40=41
+ T__41=42
+ T__42=43
+ T__43=44
+ T__44=45
+ EXPLAIN=46
+ PROFILE=47
+ UNION=48
+ ALL=49
+ OPTIONAL_=50
+ MATCH=51
+ UNWIND=52
+ AS=53
+ MERGE=54
+ ON=55
+ CREATE=56
+ SET=57
+ DETACH=58
+ DELETE_=59
+ REMOVE=60
+ CALL=61
+ YIELD=62
+ WITH=63
+ DISTINCT=64
+ RETURN=65
+ ORDER=66
+ BY=67
+ L_SKIP=68
+ LIMIT=69
+ ASCENDING=70
+ ASC=71
+ DESCENDING=72
+ DESC=73
+ USING=74
+ JOIN=75
+ START=76
+ WHERE=77
+ OR=78
+ XOR=79
+ AND=80
+ NOT=81
+ IN=82
+ STARTS=83
+ ENDS=84
+ CONTAINS=85
+ REGEXP=86
+ IS=87
+ NULL_=88
+ COUNT=89
+ ANY=90
+ NONE=91
+ SINGLE=92
+ TRUE_=93
+ FALSE_=94
+ EXISTS=95
+ CASE=96
+ ELSE=97
+ END=98
+ WHEN=99
+ THEN=100
+ StringLiteral=101
+ EscapedChar=102
+ HexInteger=103
+ DecimalInteger=104
+ OctalInteger=105
+ HexLetter=106
+ HexDigit=107
+ Digit=108
+ NonZeroDigit=109
+ NonZeroOctDigit=110
+ OctDigit=111
+ ZeroDigit=112
+ ExponentDecimalReal=113
+ RegularDecimalReal=114
+ FILTER=115
+ EXTRACT=116
+ UnescapedSymbolicName=117
+ CONSTRAINT=118
+ DO=119
+ FOR=120
+ REQUIRE=121
+ UNIQUE=122
+ MANDATORY=123
+ SCALAR=124
+ OF=125
+ ADD=126
+ DROP=127
+ IdentifierStart=128
+ IdentifierPart=129
+ EscapedSymbolicName=130
+ SP=131
+ WHITESPACE=132
+ Comment=133
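+ # These token types mirror the companion lexer generated from Lcypher.g4.
+ # SP (=131) is explicit whitespace in this grammar, which is why the rule
+ # methods below repeatedly test `if _la==131` for optional spacing.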
+
+ def __init__(self, input:TokenStream, output:TextIO = sys.stdout):
+ super().__init__(input, output)
+ self.checkVersion("4.13.2")
+ self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
+ self._predicates = None
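+
+ # Minimal usage sketch (assuming the companion generated LcypherLexer from
+ # the same Lcypher.g4 grammar is importable alongside this parser):
+ #   from antlr4 import InputStream, CommonTokenStream
+ #   lexer = LcypherLexer(InputStream("MATCH (n) RETURN n"))
+ #   parser = LcypherParser(CommonTokenStream(lexer))
+ #   tree = parser.oC_Cypher()  # rule 0 is the start rule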
+
+
+
+
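+ # Each grammar rule gets a *Context class: typed accessors for its children
+ # (tokens via getToken, sub-rules via getTypedRuleContext) plus enterRule/
+ # exitRule hooks that dispatch to any ParseTreeListener that defines them.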
+ class OC_CypherContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Statement(self):
+ return self.getTypedRuleContext(LcypherParser.OC_StatementContext,0)
+
+
+ def EOF(self):
+ return self.getToken(LcypherParser.EOF, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Cypher
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Cypher" ):
+ listener.enterOC_Cypher(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Cypher" ):
+ listener.exitOC_Cypher(self)
+
+
+
+
+ def oC_Cypher(self):
+
+ localctx = LcypherParser.OC_CypherContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 0, self.RULE_oC_Cypher)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 201
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 200
+ self.match(LcypherParser.SP)
+
+
+ self.state = 203
+ self.oC_Statement()
+ self.state = 208
+ self._errHandler.sync(self)
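+ # Decision 2: does an optional trailing ';' (T__0, optionally preceded
+ # by SP) follow the statement? adaptivePredict answers this from the ATN.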
+ la_ = self._interp.adaptivePredict(self._input,2,self._ctx)
+ if la_ == 1:
+ self.state = 205
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 204
+ self.match(LcypherParser.SP)
+
+
+ self.state = 207
+ self.match(LcypherParser.T__0)
+
+
+ self.state = 211
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 210
+ self.match(LcypherParser.SP)
+
+
+ self.state = 213
+ self.match(LcypherParser.EOF)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
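+
+ # Every rule method below follows the same generated template: allocate a
+ # context, enterRule, match tokens / invoke sub-rules while tracking
+ # self.state, report and recover on RecognitionException, exitRule in finally.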
+
+
+ class OC_StatementContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Query(self):
+ return self.getTypedRuleContext(LcypherParser.OC_QueryContext,0)
+
+
+ def EXPLAIN(self):
+ return self.getToken(LcypherParser.EXPLAIN, 0)
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def PROFILE(self):
+ return self.getToken(LcypherParser.PROFILE, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Statement
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Statement" ):
+ listener.enterOC_Statement(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Statement" ):
+ listener.exitOC_Statement(self)
+
+
+
+
+ def oC_Statement(self):
+
+ localctx = LcypherParser.OC_StatementContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 2, self.RULE_oC_Statement)
+ self._la = 0 # Token type
+ try:
+ self.state = 226
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [50, 51, 52, 54, 56, 57, 58, 59, 60, 61, 63, 65]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 215
+ self.oC_Query()
+ pass
+ elif token in [46]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 216
+ self.match(LcypherParser.EXPLAIN)
+ self.state = 218
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 217
+ self.match(LcypherParser.SP)
+
+
+ self.state = 220
+ self.oC_Query()
+ pass
+ elif token in [47]:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 221
+ self.match(LcypherParser.PROFILE)
+ self.state = 223
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 222
+ self.match(LcypherParser.SP)
+
+
+ self.state = 225
+ self.oC_Query()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_QueryContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_RegularQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RegularQueryContext,0)
+
+
+ def oC_StandaloneCall(self):
+ return self.getTypedRuleContext(LcypherParser.OC_StandaloneCallContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Query
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Query" ):
+ listener.enterOC_Query(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Query" ):
+ listener.exitOC_Query(self)
+
+
+
+
+ def oC_Query(self):
+
+ localctx = LcypherParser.OC_QueryContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 4, self.RULE_oC_Query)
+ try:
+ self.state = 230
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,7,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 228
+ self.oC_RegularQuery()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 229
+ self.oC_StandaloneCall()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RegularQueryContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SingleQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SingleQueryContext,0)
+
+
+ def oC_Union(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_UnionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_UnionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RegularQuery
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RegularQuery" ):
+ listener.enterOC_RegularQuery(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RegularQuery" ):
+ listener.exitOC_RegularQuery(self)
+
+
+
+
+ def oC_RegularQuery(self):
+
+ localctx = LcypherParser.OC_RegularQueryContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 6, self.RULE_oC_RegularQuery)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 232
+ self.oC_SingleQuery()
+ self.state = 239
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,9,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 234
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 233
+ self.match(LcypherParser.SP)
+
+
+ self.state = 236
+ self.oC_Union()
+ self.state = 241
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,9,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_UnionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def UNION(self):
+ return self.getToken(LcypherParser.UNION, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def ALL(self):
+ return self.getToken(LcypherParser.ALL, 0)
+
+ def oC_SingleQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SingleQueryContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Union
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Union" ):
+ listener.enterOC_Union(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Union" ):
+ listener.exitOC_Union(self)
+
+
+
+
+ def oC_Union(self):
+
+ localctx = LcypherParser.OC_UnionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 8, self.RULE_oC_Union)
+ self._la = 0 # Token type
+ try:
+ self.state = 254
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,12,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 242
+ self.match(LcypherParser.UNION)
+ self.state = 243
+ self.match(LcypherParser.SP)
+ self.state = 244
+ self.match(LcypherParser.ALL)
+ self.state = 246
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 245
+ self.match(LcypherParser.SP)
+
+
+ self.state = 248
+ self.oC_SingleQuery()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 249
+ self.match(LcypherParser.UNION)
+ self.state = 251
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 250
+ self.match(LcypherParser.SP)
+
+
+ self.state = 253
+ self.oC_SingleQuery()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SingleQueryContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SinglePartQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SinglePartQueryContext,0)
+
+
+ def oC_MultiPartQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_MultiPartQueryContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SingleQuery
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SingleQuery" ):
+ listener.enterOC_SingleQuery(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SingleQuery" ):
+ listener.exitOC_SingleQuery(self)
+
+
+
+
+ def oC_SingleQuery(self):
+
+ localctx = LcypherParser.OC_SingleQueryContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 10, self.RULE_oC_SingleQuery)
+ try:
+ self.state = 258
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,13,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 256
+ self.oC_SinglePartQuery()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 257
+ self.oC_MultiPartQuery()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SinglePartQueryContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Return(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ReturnContext,0)
+
+
+ def oC_ReadingClause(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ReadingClauseContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ReadingClauseContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_UpdatingClause(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_UpdatingClauseContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_UpdatingClauseContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SinglePartQuery
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SinglePartQuery" ):
+ listener.enterOC_SinglePartQuery(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SinglePartQuery" ):
+ listener.exitOC_SinglePartQuery(self)
+
+
+
+
+ def oC_SinglePartQuery(self):
+
+ localctx = LcypherParser.OC_SinglePartQueryContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 12, self.RULE_oC_SinglePartQuery)
+ self._la = 0 # Token type
+ try:
+ self.state = 295
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,22,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 266
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
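+ # Bitset membership test: 2313724308561592320 has bits 50 (OPTIONAL_),
+ # 51 (MATCH), 52 (UNWIND) and 61 (CALL) set, i.e. the tokens that can
+ # begin a reading clause.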
+ while (((_la) & ~0x3f) == 0 and ((1 << _la) & 2313724308561592320) != 0):
+ self.state = 260
+ self.oC_ReadingClause()
+ self.state = 262
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 261
+ self.match(LcypherParser.SP)
+
+
+ self.state = 268
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 269
+ self.oC_Return()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 276
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while (((_la) & ~0x3f) == 0 and ((1 << _la) & 2313724308561592320) != 0):
+ self.state = 270
+ self.oC_ReadingClause()
+ self.state = 272
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 271
+ self.match(LcypherParser.SP)
+
+
+ self.state = 278
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 279
+ self.oC_UpdatingClause()
+ self.state = 286
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,19,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 281
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 280
+ self.match(LcypherParser.SP)
+
+
+ self.state = 283
+ self.oC_UpdatingClause()
+ self.state = 288
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,19,self._ctx)
+
+ self.state = 293
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,21,self._ctx)
+ if la_ == 1:
+ self.state = 290
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 289
+ self.match(LcypherParser.SP)
+
+
+ self.state = 292
+ self.oC_Return()
+
+
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MultiPartQueryContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SinglePartQuery(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SinglePartQueryContext,0)
+
+
+ def oC_With(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_WithContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_WithContext,i)
+
+
+ def oC_ReadingClause(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ReadingClauseContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ReadingClauseContext,i)
+
+
+ def oC_UpdatingClause(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_UpdatingClauseContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_UpdatingClauseContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_MultiPartQuery
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_MultiPartQuery" ):
+ listener.enterOC_MultiPartQuery(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_MultiPartQuery" ):
+ listener.exitOC_MultiPartQuery(self)
+
+
+
+
+ def oC_MultiPartQuery(self):
+
+ localctx = LcypherParser.OC_MultiPartQueryContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 14, self.RULE_oC_MultiPartQuery)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 319
+ self._errHandler.sync(self)
+ _alt = 1
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt == 1:
+ self.state = 303
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while (((_la) & ~0x3f) == 0 and ((1 << _la) & 2313724308561592320) != 0):
+ self.state = 297
+ self.oC_ReadingClause()
+ self.state = 299
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 298
+ self.match(LcypherParser.SP)
+
+
+ self.state = 305
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 312
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
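+ # 2251799813685248000 sets bits 54 (MERGE), 56 (CREATE), 57 (SET),
+ # 58 (DETACH), 59 (DELETE_) and 60 (REMOVE): the tokens that can begin
+ # an updating clause.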
+ while (((_la) & ~0x3f) == 0 and ((1 << _la) & 2251799813685248000) != 0):
+ self.state = 306
+ self.oC_UpdatingClause()
+ self.state = 308
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 307
+ self.match(LcypherParser.SP)
+
+
+ self.state = 314
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 315
+ self.oC_With()
+ self.state = 317
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 316
+ self.match(LcypherParser.SP)
+
+
+
+ else:
+ raise NoViableAltException(self)
+ self.state = 321
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,28,self._ctx)
+
+ self.state = 323
+ self.oC_SinglePartQuery()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_UpdatingClauseContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Create(self):
+ return self.getTypedRuleContext(LcypherParser.OC_CreateContext,0)
+
+
+ def oC_Merge(self):
+ return self.getTypedRuleContext(LcypherParser.OC_MergeContext,0)
+
+
+ def oC_Delete(self):
+ return self.getTypedRuleContext(LcypherParser.OC_DeleteContext,0)
+
+
+ def oC_Set(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SetContext,0)
+
+
+ def oC_Remove(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RemoveContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_UpdatingClause
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_UpdatingClause" ):
+ listener.enterOC_UpdatingClause(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_UpdatingClause" ):
+ listener.exitOC_UpdatingClause(self)
+
+
+
+
+ def oC_UpdatingClause(self):
+
+ localctx = LcypherParser.OC_UpdatingClauseContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 16, self.RULE_oC_UpdatingClause)
+ try:
+ self.state = 330
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [56]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 325
+ self.oC_Create()
+ pass
+ elif token in [54]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 326
+ self.oC_Merge()
+ pass
+ elif token in [58, 59]:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 327
+ self.oC_Delete()
+ pass
+ elif token in [57]:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 328
+ self.oC_Set()
+ pass
+ elif token in [60]:
+ self.enterOuterAlt(localctx, 5)
+ self.state = 329
+ self.oC_Remove()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReadingClauseContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Match(self):
+ return self.getTypedRuleContext(LcypherParser.OC_MatchContext,0)
+
+
+ def oC_Unwind(self):
+ return self.getTypedRuleContext(LcypherParser.OC_UnwindContext,0)
+
+
+ def oC_InQueryCall(self):
+ return self.getTypedRuleContext(LcypherParser.OC_InQueryCallContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ReadingClause
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ReadingClause" ):
+ listener.enterOC_ReadingClause(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ReadingClause" ):
+ listener.exitOC_ReadingClause(self)
+
+
+
+
+ def oC_ReadingClause(self):
+
+ localctx = LcypherParser.OC_ReadingClauseContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 18, self.RULE_oC_ReadingClause)
+ try:
+ self.state = 335
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [50, 51]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 332
+ self.oC_Match()
+ pass
+ elif token in [52]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 333
+ self.oC_Unwind()
+ pass
+ elif token in [61]:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 334
+ self.oC_InQueryCall()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MatchContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def MATCH(self):
+ return self.getToken(LcypherParser.MATCH, 0)
+
+ def oC_Pattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternContext,0)
+
+
+ def OPTIONAL_(self):
+ return self.getToken(LcypherParser.OPTIONAL_, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Hint(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_HintContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_HintContext,i)
+
+
+ def oC_Where(self):
+ return self.getTypedRuleContext(LcypherParser.OC_WhereContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Match
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Match" ):
+ listener.enterOC_Match(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Match" ):
+ listener.exitOC_Match(self)
+
+
+
+
+ def oC_Match(self):
+
+ localctx = LcypherParser.OC_MatchContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 20, self.RULE_oC_Match)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 339
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==50:
+ self.state = 337
+ self.match(LcypherParser.OPTIONAL_)
+ self.state = 338
+ self.match(LcypherParser.SP)
+
+
+ self.state = 341
+ self.match(LcypherParser.MATCH)
+ self.state = 343
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 342
+ self.match(LcypherParser.SP)
+
+
+ self.state = 345
+ self.oC_Pattern()
+ self.state = 352
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,34,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 347
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 346
+ self.match(LcypherParser.SP)
+
+
+ self.state = 349
+ self.oC_Hint()
+ self.state = 354
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,34,self._ctx)
+
+ self.state = 359
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,36,self._ctx)
+ if la_ == 1:
+ self.state = 356
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 355
+ self.match(LcypherParser.SP)
+
+
+ self.state = 358
+ self.oC_Where()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_UnwindContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def UNWIND(self):
+ return self.getToken(LcypherParser.UNWIND, 0)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def AS(self):
+ return self.getToken(LcypherParser.AS, 0)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Unwind
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Unwind" ):
+ listener.enterOC_Unwind(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Unwind" ):
+ listener.exitOC_Unwind(self)
+
+
+
+
+ def oC_Unwind(self):
+
+ localctx = LcypherParser.OC_UnwindContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 22, self.RULE_oC_Unwind)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 361
+ self.match(LcypherParser.UNWIND)
+ self.state = 363
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 362
+ self.match(LcypherParser.SP)
+
+
+ self.state = 365
+ self.oC_Expression()
+ self.state = 366
+ self.match(LcypherParser.SP)
+ self.state = 367
+ self.match(LcypherParser.AS)
+ self.state = 368
+ self.match(LcypherParser.SP)
+ self.state = 369
+ self.oC_Variable()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MergeContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def MERGE(self):
+ return self.getToken(LcypherParser.MERGE, 0)
+
+ def oC_PatternPart(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternPartContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_MergeAction(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_MergeActionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_MergeActionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Merge
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Merge" ):
+ listener.enterOC_Merge(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Merge" ):
+ listener.exitOC_Merge(self)
+
+
+
+
+ def oC_Merge(self):
+
+ localctx = LcypherParser.OC_MergeContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 24, self.RULE_oC_Merge)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 371
+ self.match(LcypherParser.MERGE)
+ self.state = 373
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 372
+ self.match(LcypherParser.SP)
+
+
+ self.state = 375
+ self.oC_PatternPart()
+ self.state = 380
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,39,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 376
+ self.match(LcypherParser.SP)
+ self.state = 377
+ self.oC_MergeAction()
+ self.state = 382
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,39,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MergeActionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def ON(self):
+ return self.getToken(LcypherParser.ON, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def MATCH(self):
+ return self.getToken(LcypherParser.MATCH, 0)
+
+ def oC_Set(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SetContext,0)
+
+
+ def CREATE(self):
+ return self.getToken(LcypherParser.CREATE, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_MergeAction
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_MergeAction" ):
+ listener.enterOC_MergeAction(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_MergeAction" ):
+ listener.exitOC_MergeAction(self)
+
+
+
+
+ def oC_MergeAction(self):
+
+ localctx = LcypherParser.OC_MergeActionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 26, self.RULE_oC_MergeAction)
+ try:
+ self.state = 393
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,40,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 383
+ self.match(LcypherParser.ON)
+ self.state = 384
+ self.match(LcypherParser.SP)
+ self.state = 385
+ self.match(LcypherParser.MATCH)
+ self.state = 386
+ self.match(LcypherParser.SP)
+ self.state = 387
+ self.oC_Set()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 388
+ self.match(LcypherParser.ON)
+ self.state = 389
+ self.match(LcypherParser.SP)
+ self.state = 390
+ self.match(LcypherParser.CREATE)
+ self.state = 391
+ self.match(LcypherParser.SP)
+ self.state = 392
+ self.oC_Set()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_CreateContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def CREATE(self):
+ return self.getToken(LcypherParser.CREATE, 0)
+
+ def oC_Pattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Create
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Create" ):
+ listener.enterOC_Create(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Create" ):
+ listener.exitOC_Create(self)
+
+
+
+
+ def oC_Create(self):
+
+ localctx = LcypherParser.OC_CreateContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 28, self.RULE_oC_Create)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 395
+ self.match(LcypherParser.CREATE)
+ self.state = 397
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 396
+ self.match(LcypherParser.SP)
+
+
+ self.state = 399
+ self.oC_Pattern()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SetContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SET(self):
+ return self.getToken(LcypherParser.SET, 0)
+
+ def oC_SetItem(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_SetItemContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_SetItemContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Set
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Set" ):
+ listener.enterOC_Set(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Set" ):
+ listener.exitOC_Set(self)
+
+
+
+
+ def oC_Set(self):
+
+ localctx = LcypherParser.OC_SetContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 30, self.RULE_oC_Set)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 401
+ self.match(LcypherParser.SET)
+ self.state = 403
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 402
+ self.match(LcypherParser.SP)
+
+
+ self.state = 405
+ self.oC_SetItem()
+ self.state = 416
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,45,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 407
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 406
+ self.match(LcypherParser.SP)
+
+
+ self.state = 409
+ self.match(LcypherParser.T__1)
+ self.state = 411
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 410
+ self.match(LcypherParser.SP)
+
+
+ self.state = 413
+ self.oC_SetItem()
+ self.state = 418
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,45,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SetItemContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PropertyExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyExpressionContext,0)
+
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_NodeLabels(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodeLabelsContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SetItem
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SetItem" ):
+ listener.enterOC_SetItem(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SetItem" ):
+ listener.exitOC_SetItem(self)
+
+
+
+
+ def oC_SetItem(self):
+
+ localctx = LcypherParser.OC_SetItemContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 32, self.RULE_oC_SetItem)
+ self._la = 0 # Token type
+ try:
+ self.state = 455
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,53,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 419
+ self.oC_PropertyExpression()
+ self.state = 421
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 420
+ self.match(LcypherParser.SP)
+
+
+ self.state = 423
+ self.match(LcypherParser.T__2)
+ self.state = 425
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 424
+ self.match(LcypherParser.SP)
+
+
+ self.state = 427
+ self.oC_Expression()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 429
+ self.oC_Variable()
+ self.state = 431
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 430
+ self.match(LcypherParser.SP)
+
+
+ self.state = 433
+ self.match(LcypherParser.T__2)
+ self.state = 435
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 434
+ self.match(LcypherParser.SP)
+
+
+ self.state = 437
+ self.oC_Expression()
+ pass
+
+ elif la_ == 3:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 439
+ self.oC_Variable()
+ self.state = 441
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 440
+ self.match(LcypherParser.SP)
+
+
+ self.state = 443
+ self.match(LcypherParser.T__3)
+ self.state = 445
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 444
+ self.match(LcypherParser.SP)
+
+
+ self.state = 447
+ self.oC_Expression()
+ pass
+
+ elif la_ == 4:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 449
+ self.oC_Variable()
+ self.state = 451
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 450
+ self.match(LcypherParser.SP)
+
+
+ self.state = 453
+ self.oC_NodeLabels()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_DeleteContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def DELETE_(self):
+ return self.getToken(LcypherParser.DELETE_, 0)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def DETACH(self):
+ return self.getToken(LcypherParser.DETACH, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Delete
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Delete" ):
+ listener.enterOC_Delete(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Delete" ):
+ listener.exitOC_Delete(self)
+
+
+
+
+ def oC_Delete(self):
+
+ localctx = LcypherParser.OC_DeleteContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 34, self.RULE_oC_Delete)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 459
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==58:
+ self.state = 457
+ self.match(LcypherParser.DETACH)
+ self.state = 458
+ self.match(LcypherParser.SP)
+
+
+ self.state = 461
+ self.match(LcypherParser.DELETE_)
+ self.state = 463
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 462
+ self.match(LcypherParser.SP)
+
+
+ self.state = 465
+ self.oC_Expression()
+ self.state = 476
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 467
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 466
+ self.match(LcypherParser.SP)
+
+
+ self.state = 469
+ self.match(LcypherParser.T__1)
+ self.state = 471
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 470
+ self.match(LcypherParser.SP)
+
+
+ self.state = 473
+ self.oC_Expression()
+ self.state = 478
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RemoveContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def REMOVE(self):
+ return self.getToken(LcypherParser.REMOVE, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_RemoveItem(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_RemoveItemContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_RemoveItemContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Remove
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Remove" ):
+ listener.enterOC_Remove(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Remove" ):
+ listener.exitOC_Remove(self)
+
+
+
+
+ def oC_Remove(self):
+
+ localctx = LcypherParser.OC_RemoveContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 36, self.RULE_oC_Remove)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 479
+ self.match(LcypherParser.REMOVE)
+ self.state = 480
+ self.match(LcypherParser.SP)
+ self.state = 481
+ self.oC_RemoveItem()
+ self.state = 492
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,61,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 483
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 482
+ self.match(LcypherParser.SP)
+
+
+ self.state = 485
+ self.match(LcypherParser.T__1)
+ self.state = 487
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 486
+ self.match(LcypherParser.SP)
+
+
+ self.state = 489
+ self.oC_RemoveItem()
+ self.state = 494
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,61,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RemoveItemContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_NodeLabels(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodeLabelsContext,0)
+
+
+ def oC_PropertyExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RemoveItem
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RemoveItem" ):
+ listener.enterOC_RemoveItem(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RemoveItem" ):
+ listener.exitOC_RemoveItem(self)
+
+
+
+
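+ # Reconstructed production: oC_RemoveItem : ( oC_Variable oC_NodeLabels ) | oC_PropertyExpression ;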
+ def oC_RemoveItem(self):
+
+ localctx = LcypherParser.OC_RemoveItemContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 38, self.RULE_oC_RemoveItem)
+ try:
+ self.state = 499
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,62,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 495
+ self.oC_Variable()
+ self.state = 496
+ self.oC_NodeLabels()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 498
+ self.oC_PropertyExpression()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_InQueryCallContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def CALL(self):
+ return self.getToken(LcypherParser.CALL, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_ExplicitProcedureInvocation(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExplicitProcedureInvocationContext,0)
+
+
+ def YIELD(self):
+ return self.getToken(LcypherParser.YIELD, 0)
+
+ def oC_YieldItems(self):
+ return self.getTypedRuleContext(LcypherParser.OC_YieldItemsContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_InQueryCall
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_InQueryCall" ):
+ listener.enterOC_InQueryCall(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_InQueryCall" ):
+ listener.exitOC_InQueryCall(self)
+
+
+
+
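+ # Reconstructed production: oC_InQueryCall : CALL SP oC_ExplicitProcedureInvocation ( SP? YIELD SP oC_YieldItems )? ;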
+ def oC_InQueryCall(self):
+
+ localctx = LcypherParser.OC_InQueryCallContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 40, self.RULE_oC_InQueryCall)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 501
+ self.match(LcypherParser.CALL)
+ self.state = 502
+ self.match(LcypherParser.SP)
+ self.state = 503
+ self.oC_ExplicitProcedureInvocation()
+ self.state = 510
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,64,self._ctx)
+ if la_ == 1:
+ self.state = 505
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 504
+ self.match(LcypherParser.SP)
+
+
+ self.state = 507
+ self.match(LcypherParser.YIELD)
+ self.state = 508
+ self.match(LcypherParser.SP)
+ self.state = 509
+ self.oC_YieldItems()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_StandaloneCallContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def CALL(self):
+ return self.getToken(LcypherParser.CALL, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_ExplicitProcedureInvocation(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExplicitProcedureInvocationContext,0)
+
+
+ def oC_ImplicitProcedureInvocation(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ImplicitProcedureInvocationContext,0)
+
+
+ def YIELD(self):
+ return self.getToken(LcypherParser.YIELD, 0)
+
+ def oC_YieldItems(self):
+ return self.getTypedRuleContext(LcypherParser.OC_YieldItemsContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_StandaloneCall
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_StandaloneCall" ):
+ listener.enterOC_StandaloneCall(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_StandaloneCall" ):
+ listener.exitOC_StandaloneCall(self)
+
+
+
+
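+ # Reconstructed production: oC_StandaloneCall : CALL SP ( oC_ExplicitProcedureInvocation | oC_ImplicitProcedureInvocation ) ( SP YIELD SP oC_YieldItems )? ;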
+ def oC_StandaloneCall(self):
+
+ localctx = LcypherParser.OC_StandaloneCallContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 42, self.RULE_oC_StandaloneCall)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 512
+ self.match(LcypherParser.CALL)
+ self.state = 513
+ self.match(LcypherParser.SP)
+ self.state = 516
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,65,self._ctx)
+ if la_ == 1:
+ self.state = 514
+ self.oC_ExplicitProcedureInvocation()
+ pass
+
+ elif la_ == 2:
+ self.state = 515
+ self.oC_ImplicitProcedureInvocation()
+ pass
+
+
+ self.state = 522
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,66,self._ctx)
+ if la_ == 1:
+ self.state = 518
+ self.match(LcypherParser.SP)
+ self.state = 519
+ self.match(LcypherParser.YIELD)
+ self.state = 520
+ self.match(LcypherParser.SP)
+ self.state = 521
+ self.oC_YieldItems()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_YieldItemsContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Where(self):
+ return self.getTypedRuleContext(LcypherParser.OC_WhereContext,0)
+
+
+ def oC_YieldItem(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_YieldItemContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_YieldItemContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_YieldItems
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_YieldItems" ):
+ listener.enterOC_YieldItems(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_YieldItems" ):
+ listener.exitOC_YieldItems(self)
+
+
+
+
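+ # Reconstructed production: oC_YieldItems : ( '*' | oC_YieldItem ( SP? ',' SP? oC_YieldItem )* ) ( SP? oC_Where )? ;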
+ def oC_YieldItems(self):
+
+ localctx = LcypherParser.OC_YieldItemsContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 44, self.RULE_oC_YieldItems)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 539
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [5]:
+ self.state = 524
+ self.match(LcypherParser.T__4)
+ pass
+ elif token in [89, 90, 91, 92, 106, 115, 116, 117, 130]:
+ self.state = 525
+ self.oC_YieldItem()
+ self.state = 536
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,69,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 527
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 526
+ self.match(LcypherParser.SP)
+
+
+ self.state = 529
+ self.match(LcypherParser.T__1)
+ self.state = 531
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 530
+ self.match(LcypherParser.SP)
+
+
+ self.state = 533
+ self.oC_YieldItem()
+ self.state = 538
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,69,self._ctx)
+
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ self.state = 545
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,72,self._ctx)
+ if la_ == 1:
+ self.state = 542
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 541
+ self.match(LcypherParser.SP)
+
+
+ self.state = 544
+ self.oC_Where()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_YieldItemContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_ProcedureResultField(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ProcedureResultFieldContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def AS(self):
+ return self.getToken(LcypherParser.AS, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_YieldItem
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_YieldItem" ):
+ listener.enterOC_YieldItem(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_YieldItem" ):
+ listener.exitOC_YieldItem(self)
+
+
+
+
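+ # Reconstructed production: oC_YieldItem : ( oC_ProcedureResultField SP AS SP )? oC_Variable ;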
+ def oC_YieldItem(self):
+
+ localctx = LcypherParser.OC_YieldItemContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 46, self.RULE_oC_YieldItem)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 552
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,73,self._ctx)
+ if la_ == 1:
+ self.state = 547
+ self.oC_ProcedureResultField()
+ self.state = 548
+ self.match(LcypherParser.SP)
+ self.state = 549
+ self.match(LcypherParser.AS)
+ self.state = 550
+ self.match(LcypherParser.SP)
+
+
+ self.state = 554
+ self.oC_Variable()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_WithContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def WITH(self):
+ return self.getToken(LcypherParser.WITH, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_ReturnBody(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ReturnBodyContext,0)
+
+
+ def DISTINCT(self):
+ return self.getToken(LcypherParser.DISTINCT, 0)
+
+ def oC_Where(self):
+ return self.getTypedRuleContext(LcypherParser.OC_WhereContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_With
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_With" ):
+ listener.enterOC_With(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_With" ):
+ listener.exitOC_With(self)
+
+
+
+
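+ # Reconstructed production: oC_With : WITH ( SP? DISTINCT )? SP oC_ReturnBody ( SP? oC_Where )? ;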
+ def oC_With(self):
+
+ localctx = LcypherParser.OC_WithContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 48, self.RULE_oC_With)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 556
+ self.match(LcypherParser.WITH)
+ self.state = 561
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,75,self._ctx)
+ if la_ == 1:
+ self.state = 558
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 557
+ self.match(LcypherParser.SP)
+
+
+ self.state = 560
+ self.match(LcypherParser.DISTINCT)
+
+
+ self.state = 563
+ self.match(LcypherParser.SP)
+ self.state = 564
+ self.oC_ReturnBody()
+ self.state = 569
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,77,self._ctx)
+ if la_ == 1:
+ self.state = 566
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 565
+ self.match(LcypherParser.SP)
+
+
+ self.state = 568
+ self.oC_Where()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReturnContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def RETURN(self):
+ return self.getToken(LcypherParser.RETURN, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_ReturnBody(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ReturnBodyContext,0)
+
+
+ def DISTINCT(self):
+ return self.getToken(LcypherParser.DISTINCT, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Return
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Return" ):
+ listener.enterOC_Return(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Return" ):
+ listener.exitOC_Return(self)
+
+
+
+
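+ # Reconstructed production: oC_Return : RETURN ( SP? DISTINCT )? SP oC_ReturnBody ;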
+ def oC_Return(self):
+
+ localctx = LcypherParser.OC_ReturnContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 50, self.RULE_oC_Return)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 571
+ self.match(LcypherParser.RETURN)
+ self.state = 576
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,79,self._ctx)
+ if la_ == 1:
+ self.state = 573
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 572
+ self.match(LcypherParser.SP)
+
+
+ self.state = 575
+ self.match(LcypherParser.DISTINCT)
+
+
+ self.state = 578
+ self.match(LcypherParser.SP)
+ self.state = 579
+ self.oC_ReturnBody()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReturnBodyContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_ReturnItems(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ReturnItemsContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Order(self):
+ return self.getTypedRuleContext(LcypherParser.OC_OrderContext,0)
+
+
+ def oC_Skip(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SkipContext,0)
+
+
+ def oC_Limit(self):
+ return self.getTypedRuleContext(LcypherParser.OC_LimitContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ReturnBody
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ReturnBody" ):
+ listener.enterOC_ReturnBody(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ReturnBody" ):
+ listener.exitOC_ReturnBody(self)
+
+
+
+
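+ # Reconstructed production: oC_ReturnBody : oC_ReturnItems ( SP oC_Order )? ( SP oC_Skip )? ( SP oC_Limit )? ;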
+ def oC_ReturnBody(self):
+
+ localctx = LcypherParser.OC_ReturnBodyContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 52, self.RULE_oC_ReturnBody)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 581
+ self.oC_ReturnItems()
+ self.state = 584
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,80,self._ctx)
+ if la_ == 1:
+ self.state = 582
+ self.match(LcypherParser.SP)
+ self.state = 583
+ self.oC_Order()
+
+
+ self.state = 588
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,81,self._ctx)
+ if la_ == 1:
+ self.state = 586
+ self.match(LcypherParser.SP)
+ self.state = 587
+ self.oC_Skip()
+
+
+ self.state = 592
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,82,self._ctx)
+ if la_ == 1:
+ self.state = 590
+ self.match(LcypherParser.SP)
+ self.state = 591
+ self.oC_Limit()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReturnItemsContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_ReturnItem(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ReturnItemContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ReturnItemContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ReturnItems
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ReturnItems" ):
+ listener.enterOC_ReturnItems(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ReturnItems" ):
+ listener.exitOC_ReturnItems(self)
+
+
+
+
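+ # Reconstructed production: oC_ReturnItems : ( '*' ( SP? ',' SP? oC_ReturnItem )* ) | ( oC_ReturnItem ( SP? ',' SP? oC_ReturnItem )* ) ;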
+ def oC_ReturnItems(self):
+
+ localctx = LcypherParser.OC_ReturnItemsContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 54, self.RULE_oC_ReturnItems)
+ self._la = 0 # Token type
+ try:
+ self.state = 622
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [5]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 594
+ self.match(LcypherParser.T__4)
+ self.state = 605
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,85,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 596
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 595
+ self.match(LcypherParser.SP)
+
+
+ self.state = 598
+ self.match(LcypherParser.T__1)
+ self.state = 600
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 599
+ self.match(LcypherParser.SP)
+
+
+ self.state = 602
+ self.oC_ReturnItem()
+ self.state = 607
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,85,self._ctx)
+
+ pass
+ elif token in [6, 8, 13, 14, 24, 26, 49, 81, 88, 89, 90, 91, 92, 93, 94, 95, 96, 101, 103, 104, 105, 106, 113, 114, 115, 116, 117, 130]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 608
+ self.oC_ReturnItem()
+ self.state = 619
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,88,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 610
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 609
+ self.match(LcypherParser.SP)
+
+
+ self.state = 612
+ self.match(LcypherParser.T__1)
+ self.state = 614
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 613
+ self.match(LcypherParser.SP)
+
+
+ self.state = 616
+ self.oC_ReturnItem()
+ self.state = 621
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,88,self._ctx)
+
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReturnItemContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def AS(self):
+ return self.getToken(LcypherParser.AS, 0)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ReturnItem
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ReturnItem" ):
+ listener.enterOC_ReturnItem(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ReturnItem" ):
+ listener.exitOC_ReturnItem(self)
+
+
+
+
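+ # Reconstructed production: oC_ReturnItem : ( oC_Expression SP AS SP oC_Variable ) | oC_Expression ;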
+ def oC_ReturnItem(self):
+
+ localctx = LcypherParser.OC_ReturnItemContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 56, self.RULE_oC_ReturnItem)
+ try:
+ self.state = 631
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,90,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 624
+ self.oC_Expression()
+ self.state = 625
+ self.match(LcypherParser.SP)
+ self.state = 626
+ self.match(LcypherParser.AS)
+ self.state = 627
+ self.match(LcypherParser.SP)
+ self.state = 628
+ self.oC_Variable()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 630
+ self.oC_Expression()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_OrderContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def ORDER(self):
+ return self.getToken(LcypherParser.ORDER, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def BY(self):
+ return self.getToken(LcypherParser.BY, 0)
+
+ def oC_SortItem(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_SortItemContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_SortItemContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Order
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Order" ):
+ listener.enterOC_Order(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Order" ):
+ listener.exitOC_Order(self)
+
+
+
+
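+ # Reconstructed production: oC_Order : ORDER SP BY SP oC_SortItem ( ',' SP? oC_SortItem )* ;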
+ def oC_Order(self):
+
+ localctx = LcypherParser.OC_OrderContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 58, self.RULE_oC_Order)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 633
+ self.match(LcypherParser.ORDER)
+ self.state = 634
+ self.match(LcypherParser.SP)
+ self.state = 635
+ self.match(LcypherParser.BY)
+ self.state = 636
+ self.match(LcypherParser.SP)
+ self.state = 637
+ self.oC_SortItem()
+ self.state = 645
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==2:
+ self.state = 638
+ self.match(LcypherParser.T__1)
+ self.state = 640
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 639
+ self.match(LcypherParser.SP)
+
+
+ self.state = 642
+ self.oC_SortItem()
+ self.state = 647
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SkipContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def L_SKIP(self):
+ return self.getToken(LcypherParser.L_SKIP, 0)
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Skip
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Skip" ):
+ listener.enterOC_Skip(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Skip" ):
+ listener.exitOC_Skip(self)
+
+
+
+
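+ # Reconstructed production: oC_Skip : L_SKIP SP oC_Expression ;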
+ def oC_Skip(self):
+
+ localctx = LcypherParser.OC_SkipContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 60, self.RULE_oC_Skip)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 648
+ self.match(LcypherParser.L_SKIP)
+ self.state = 649
+ self.match(LcypherParser.SP)
+ self.state = 650
+ self.oC_Expression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_LimitContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def LIMIT(self):
+ return self.getToken(LcypherParser.LIMIT, 0)
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Limit
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Limit" ):
+ listener.enterOC_Limit(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Limit" ):
+ listener.exitOC_Limit(self)
+
+
+
+
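+ # Reconstructed production: oC_Limit : LIMIT SP oC_Expression ;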
+ def oC_Limit(self):
+
+ localctx = LcypherParser.OC_LimitContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 62, self.RULE_oC_Limit)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 652
+ self.match(LcypherParser.LIMIT)
+ self.state = 653
+ self.match(LcypherParser.SP)
+ self.state = 654
+ self.oC_Expression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SortItemContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def ASCENDING(self):
+ return self.getToken(LcypherParser.ASCENDING, 0)
+
+ def ASC(self):
+ return self.getToken(LcypherParser.ASC, 0)
+
+ def DESCENDING(self):
+ return self.getToken(LcypherParser.DESCENDING, 0)
+
+ def DESC(self):
+ return self.getToken(LcypherParser.DESC, 0)
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SortItem
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SortItem" ):
+ listener.enterOC_SortItem(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SortItem" ):
+ listener.exitOC_SortItem(self)
+
+
+
+
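+ # Reconstructed production: oC_SortItem : oC_Expression ( SP? ( ASCENDING | ASC | DESCENDING | DESC ) )? ;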
+ def oC_SortItem(self):
+
+ localctx = LcypherParser.OC_SortItemContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 64, self.RULE_oC_SortItem)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 656
+ self.oC_Expression()
+ self.state = 661
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,94,self._ctx)
+ if la_ == 1:
+ self.state = 658
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 657
+ self.match(LcypherParser.SP)
+
+
+ self.state = 660
+ _la = self._input.LA(1)
+ if not(((((_la - 70)) & ~0x3f) == 0 and ((1 << (_la - 70)) & 15) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_HintContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def USING(self):
+ return self.getToken(LcypherParser.USING, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def JOIN(self):
+ return self.getToken(LcypherParser.JOIN, 0)
+
+ def ON(self):
+ return self.getToken(LcypherParser.ON, 0)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def START(self):
+ return self.getToken(LcypherParser.START, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Hint
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Hint" ):
+ listener.enterOC_Hint(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Hint" ):
+ listener.exitOC_Hint(self)
+
+
+
+
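+ # Reconstructed production: oC_Hint : ( USING SP JOIN SP ON SP oC_Variable ) | ( USING SP START SP ON SP oC_Variable ) ;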
+ def oC_Hint(self):
+
+ localctx = LcypherParser.OC_HintContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 66, self.RULE_oC_Hint)
+ try:
+ self.state = 677
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,95,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 663
+ self.match(LcypherParser.USING)
+ self.state = 664
+ self.match(LcypherParser.SP)
+ self.state = 665
+ self.match(LcypherParser.JOIN)
+ self.state = 666
+ self.match(LcypherParser.SP)
+ self.state = 667
+ self.match(LcypherParser.ON)
+ self.state = 668
+ self.match(LcypherParser.SP)
+ self.state = 669
+ self.oC_Variable()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 670
+ self.match(LcypherParser.USING)
+ self.state = 671
+ self.match(LcypherParser.SP)
+ self.state = 672
+ self.match(LcypherParser.START)
+ self.state = 673
+ self.match(LcypherParser.SP)
+ self.state = 674
+ self.match(LcypherParser.ON)
+ self.state = 675
+ self.match(LcypherParser.SP)
+ self.state = 676
+ self.oC_Variable()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_WhereContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def WHERE(self):
+ return self.getToken(LcypherParser.WHERE, 0)
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Where
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Where" ):
+ listener.enterOC_Where(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Where" ):
+ listener.exitOC_Where(self)
+
+
+
+
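+ # Reconstructed production: oC_Where : WHERE SP oC_Expression ;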
+ def oC_Where(self):
+
+ localctx = LcypherParser.OC_WhereContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 68, self.RULE_oC_Where)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 679
+ self.match(LcypherParser.WHERE)
+ self.state = 680
+ self.match(LcypherParser.SP)
+ self.state = 681
+ self.oC_Expression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PatternContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PatternPart(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PatternPartContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PatternPartContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Pattern
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Pattern" ):
+ listener.enterOC_Pattern(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Pattern" ):
+ listener.exitOC_Pattern(self)
+
+
+
+
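+ # Reconstructed production: oC_Pattern : oC_PatternPart ( SP? ',' SP? oC_PatternPart )* ;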
+ def oC_Pattern(self):
+
+ localctx = LcypherParser.OC_PatternContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 70, self.RULE_oC_Pattern)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 683
+ self.oC_PatternPart()
+ self.state = 694
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,98,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 685
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 684
+ self.match(LcypherParser.SP)
+
+
+ self.state = 687
+ self.match(LcypherParser.T__1)
+ self.state = 689
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 688
+ self.match(LcypherParser.SP)
+
+
+ self.state = 691
+ self.oC_PatternPart()
+ self.state = 696
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,98,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PatternPartContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_AnonymousPatternPart(self):
+ return self.getTypedRuleContext(LcypherParser.OC_AnonymousPatternPartContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PatternPart
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PatternPart" ):
+ listener.enterOC_PatternPart(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PatternPart" ):
+ listener.exitOC_PatternPart(self)
+
+
+
+
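+ # Reconstructed production (the '=' literal is inferred from the openCypher grammar this parser appears generated from):
+ # oC_PatternPart : ( oC_Variable SP? '=' SP? oC_AnonymousPatternPart ) | oC_AnonymousPatternPart ;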
+ def oC_PatternPart(self):
+
+ localctx = LcypherParser.OC_PatternPartContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 72, self.RULE_oC_PatternPart)
+ self._la = 0 # Token type
+ try:
+ self.state = 708
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [89, 90, 91, 92, 106, 115, 116, 117, 130]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 697
+ self.oC_Variable()
+ self.state = 699
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 698
+ self.match(LcypherParser.SP)
+
+
+ self.state = 701
+ self.match(LcypherParser.T__2)
+ self.state = 703
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 702
+ self.match(LcypherParser.SP)
+
+
+ self.state = 705
+ self.oC_AnonymousPatternPart()
+ pass
+ elif token in [6]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 707
+ self.oC_AnonymousPatternPart()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_AnonymousPatternPartContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PatternElement(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternElementContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_AnonymousPatternPart
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_AnonymousPatternPart" ):
+ listener.enterOC_AnonymousPatternPart(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_AnonymousPatternPart" ):
+ listener.exitOC_AnonymousPatternPart(self)
+
+
+
+
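+ # Reconstructed production: oC_AnonymousPatternPart : oC_PatternElement ;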
+ def oC_AnonymousPatternPart(self):
+
+ localctx = LcypherParser.OC_AnonymousPatternPartContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 74, self.RULE_oC_AnonymousPatternPart)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 710
+ self.oC_PatternElement()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PatternElementContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_NodePattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodePatternContext,0)
+
+
+ def oC_PatternElementChain(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PatternElementChainContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PatternElementChainContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_PatternElement(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternElementContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PatternElement
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PatternElement" ):
+ listener.enterOC_PatternElement(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PatternElement" ):
+ listener.exitOC_PatternElement(self)
+
+
+
+
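+ # Reconstructed production: oC_PatternElement : ( oC_NodePattern ( SP? oC_PatternElementChain )* ) | ( '(' oC_PatternElement ')' ) ;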
+ def oC_PatternElement(self):
+
+ localctx = LcypherParser.OC_PatternElementContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 76, self.RULE_oC_PatternElement)
+ self._la = 0 # Token type
+ try:
+ self.state = 726
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,104,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 712
+ self.oC_NodePattern()
+ self.state = 719
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,103,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 714
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 713
+ self.match(LcypherParser.SP)
+
+
+ self.state = 716
+ self.oC_PatternElementChain()
+ self.state = 721
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,103,self._ctx)
+
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 722
+ self.match(LcypherParser.T__5)
+ self.state = 723
+ self.oC_PatternElement()
+ self.state = 724
+ self.match(LcypherParser.T__6)
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NodePatternContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_NodeLabels(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodeLabelsContext,0)
+
+
+ def oC_Properties(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertiesContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NodePattern
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NodePattern" ):
+ listener.enterOC_NodePattern(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NodePattern" ):
+ listener.exitOC_NodePattern(self)
+
+
+
+
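+ # Reconstructed production: oC_NodePattern : '(' SP? ( oC_Variable SP? )? ( oC_NodeLabels SP? )? ( oC_Properties SP? )? ')' ;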
+ def oC_NodePattern(self):
+
+ localctx = LcypherParser.OC_NodePatternContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 78, self.RULE_oC_NodePattern)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 728
+ self.match(LcypherParser.T__5)
+ self.state = 730
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 729
+ self.match(LcypherParser.SP)
+
+
+ self.state = 736
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if ((((_la - 89)) & ~0x3f) == 0 and ((1 << (_la - 89)) & 2199493148687) != 0):
+ self.state = 732
+ self.oC_Variable()
+ self.state = 734
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 733
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 742
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==10:
+ self.state = 738
+ self.oC_NodeLabels()
+ self.state = 740
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 739
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 748
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==24 or _la==26:
+ self.state = 744
+ self.oC_Properties()
+ self.state = 746
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 745
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 750
+ self.match(LcypherParser.T__6)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PatternElementChainContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_RelationshipPattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RelationshipPatternContext,0)
+
+
+ def oC_NodePattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodePatternContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PatternElementChain
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PatternElementChain" ):
+ listener.enterOC_PatternElementChain(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PatternElementChain" ):
+ listener.exitOC_PatternElementChain(self)
+
+
+
+
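+ # Reconstructed production: oC_PatternElementChain : oC_RelationshipPattern SP? oC_NodePattern ;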
+ def oC_PatternElementChain(self):
+
+ localctx = LcypherParser.OC_PatternElementChainContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 80, self.RULE_oC_PatternElementChain)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 752
+ self.oC_RelationshipPattern()
+ self.state = 754
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 753
+ self.match(LcypherParser.SP)
+
+
+ self.state = 756
+ self.oC_NodePattern()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RelationshipPatternContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_LeftArrowHead(self):
+ return self.getTypedRuleContext(LcypherParser.OC_LeftArrowHeadContext,0)
+
+
+ def oC_Dash(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_DashContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_DashContext,i)
+
+
+ def oC_RightArrowHead(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RightArrowHeadContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_RelationshipDetail(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RelationshipDetailContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RelationshipPattern
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RelationshipPattern" ):
+ listener.enterOC_RelationshipPattern(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RelationshipPattern" ):
+ listener.exitOC_RelationshipPattern(self)
+
+
+
+
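+ # Reconstructed production (four directional variants: both arrows, left only, right only, undirected):
+ # oC_RelationshipPattern : ( oC_LeftArrowHead SP? oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash SP? oC_RightArrowHead )
+ #                        | ( oC_LeftArrowHead SP? oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash )
+ #                        | ( oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash SP? oC_RightArrowHead )
+ #                        | ( oC_Dash SP? oC_RelationshipDetail? SP? oC_Dash ) ;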
+ def oC_RelationshipPattern(self):
+
+ localctx = LcypherParser.OC_RelationshipPatternContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 82, self.RULE_oC_RelationshipPattern)
+ self._la = 0 # Token type
+ try:
+ self.state = 822
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,129,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 758
+ self.oC_LeftArrowHead()
+ self.state = 760
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 759
+ self.match(LcypherParser.SP)
+
+
+ self.state = 762
+ self.oC_Dash()
+ self.state = 764
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,114,self._ctx)
+ if la_ == 1:
+ self.state = 763
+ self.match(LcypherParser.SP)
+
+
+ self.state = 767
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==8:
+ self.state = 766
+ self.oC_RelationshipDetail()
+
+
+ self.state = 770
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 769
+ self.match(LcypherParser.SP)
+
+
+ self.state = 772
+ self.oC_Dash()
+ self.state = 774
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 773
+ self.match(LcypherParser.SP)
+
+
+ self.state = 776
+ self.oC_RightArrowHead()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 778
+ self.oC_LeftArrowHead()
+ self.state = 780
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 779
+ self.match(LcypherParser.SP)
+
+
+ self.state = 782
+ self.oC_Dash()
+ self.state = 784
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,119,self._ctx)
+ if la_ == 1:
+ self.state = 783
+ self.match(LcypherParser.SP)
+
+
+ self.state = 787
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==8:
+ self.state = 786
+ self.oC_RelationshipDetail()
+
+
+ self.state = 790
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 789
+ self.match(LcypherParser.SP)
+
+
+ self.state = 792
+ self.oC_Dash()
+ pass
+
+ elif la_ == 3:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 794
+ self.oC_Dash()
+ self.state = 796
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,122,self._ctx)
+ if la_ == 1:
+ self.state = 795
+ self.match(LcypherParser.SP)
+
+
+ self.state = 799
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==8:
+ self.state = 798
+ self.oC_RelationshipDetail()
+
+
+ self.state = 802
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 801
+ self.match(LcypherParser.SP)
+
+
+ self.state = 804
+ self.oC_Dash()
+ self.state = 806
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 805
+ self.match(LcypherParser.SP)
+
+
+ self.state = 808
+ self.oC_RightArrowHead()
+ pass
+
+ elif la_ == 4:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 810
+ self.oC_Dash()
+ self.state = 812
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,126,self._ctx)
+ if la_ == 1:
+ self.state = 811
+ self.match(LcypherParser.SP)
+
+
+ self.state = 815
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==8:
+ self.state = 814
+ self.oC_RelationshipDetail()
+
+
+ self.state = 818
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 817
+ self.match(LcypherParser.SP)
+
+
+ self.state = 820
+ self.oC_Dash()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RelationshipDetailContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def oC_RelationshipTypes(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RelationshipTypesContext,0)
+
+
+ def oC_RangeLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RangeLiteralContext,0)
+
+
+ def oC_Properties(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertiesContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RelationshipDetail
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RelationshipDetail" ):
+ listener.enterOC_RelationshipDetail(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RelationshipDetail" ):
+ listener.exitOC_RelationshipDetail(self)
+
+
+
+
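+ # Reconstructed production: oC_RelationshipDetail : '[' SP? ( oC_Variable SP? )? ( oC_RelationshipTypes SP? )? oC_RangeLiteral? ( oC_Properties SP? )? ']' ;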
+ def oC_RelationshipDetail(self):
+
+ localctx = LcypherParser.OC_RelationshipDetailContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 84, self.RULE_oC_RelationshipDetail)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 824
+ self.match(LcypherParser.T__7)
+ self.state = 826
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 825
+ self.match(LcypherParser.SP)
+
+
+ self.state = 832
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if ((((_la - 89)) & ~0x3f) == 0 and ((1 << (_la - 89)) & 2199493148687) != 0):
+ self.state = 828
+ self.oC_Variable()
+ self.state = 830
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 829
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 838
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==10:
+ self.state = 834
+ self.oC_RelationshipTypes()
+ self.state = 836
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 835
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 841
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==5:
+ self.state = 840
+ self.oC_RangeLiteral()
+
+
+ self.state = 847
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==24 or _la==26:
+ self.state = 843
+ self.oC_Properties()
+ self.state = 845
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 844
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 849
+ self.match(LcypherParser.T__8)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PropertiesContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_MapLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_MapLiteralContext,0)
+
+
+ def oC_Parameter(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ParameterContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Properties
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Properties" ):
+ listener.enterOC_Properties(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Properties" ):
+ listener.exitOC_Properties(self)
+
+
+
+
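+ # Reconstructed production: oC_Properties : oC_MapLiteral | oC_Parameter ;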
+ def oC_Properties(self):
+
+ localctx = LcypherParser.OC_PropertiesContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 86, self.RULE_oC_Properties)
+ try:
+ self.state = 853
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [24]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 851
+ self.oC_MapLiteral()
+ pass
+ elif token in [26]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 852
+ self.oC_Parameter()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RelationshipTypesContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_RelTypeName(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_RelTypeNameContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_RelTypeNameContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RelationshipTypes
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RelationshipTypes" ):
+ listener.enterOC_RelationshipTypes(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RelationshipTypes" ):
+ listener.exitOC_RelationshipTypes(self)
+
+
+
+
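+ # Reconstructed production: oC_RelationshipTypes : ':' SP? oC_RelTypeName ( SP? '|' ':'? SP? oC_RelTypeName )* ;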
+ def oC_RelationshipTypes(self):
+
+ localctx = LcypherParser.OC_RelationshipTypesContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 88, self.RULE_oC_RelationshipTypes)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 855
+ self.match(LcypherParser.T__9)
+ self.state = 857
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 856
+ self.match(LcypherParser.SP)
+
+
+ self.state = 859
+ self.oC_RelTypeName()
+ self.state = 873
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,143,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 861
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 860
+ self.match(LcypherParser.SP)
+
+
+ self.state = 863
+ self.match(LcypherParser.T__10)
+ self.state = 865
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==10:
+ self.state = 864
+ self.match(LcypherParser.T__9)
+
+
+ self.state = 868
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 867
+ self.match(LcypherParser.SP)
+
+
+ self.state = 870
+ self.oC_RelTypeName()
+ self.state = 875
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,143,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NodeLabelsContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_NodeLabel(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_NodeLabelContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_NodeLabelContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NodeLabels
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NodeLabels" ):
+ listener.enterOC_NodeLabels(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NodeLabels" ):
+ listener.exitOC_NodeLabels(self)
+
+
+
+
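+ # Reconstructed production: oC_NodeLabels : oC_NodeLabel ( SP? oC_NodeLabel )* ;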
+ def oC_NodeLabels(self):
+
+ localctx = LcypherParser.OC_NodeLabelsContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 90, self.RULE_oC_NodeLabels)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 876
+ self.oC_NodeLabel()
+ self.state = 883
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,145,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 878
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 877
+ self.match(LcypherParser.SP)
+
+
+ self.state = 880
+ self.oC_NodeLabel()
+ self.state = 885
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,145,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NodeLabelContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_LabelName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_LabelNameContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NodeLabel
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NodeLabel" ):
+ listener.enterOC_NodeLabel(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NodeLabel" ):
+ listener.exitOC_NodeLabel(self)
+
+
+
+
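+    # Grammar: oC_NodeLabel : ':' SP? oC_LabelName ;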
+ def oC_NodeLabel(self):
+
+ localctx = LcypherParser.OC_NodeLabelContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 92, self.RULE_oC_NodeLabel)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 886
+ self.match(LcypherParser.T__9)
+ self.state = 888
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 887
+ self.match(LcypherParser.SP)
+
+
+ self.state = 890
+ self.oC_LabelName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RangeLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_IntegerLiteral(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_IntegerLiteralContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_IntegerLiteralContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RangeLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RangeLiteral" ):
+ listener.enterOC_RangeLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RangeLiteral" ):
+ listener.exitOC_RangeLiteral(self)
+
+
+
+
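+    # Grammar: oC_RangeLiteral : '*' SP? ( oC_IntegerLiteral SP? )? ( '..' SP? ( oC_IntegerLiteral SP? )? )? ;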
+ def oC_RangeLiteral(self):
+
+ localctx = LcypherParser.OC_RangeLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 94, self.RULE_oC_RangeLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 892
+ self.match(LcypherParser.T__4)
+ self.state = 894
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 893
+ self.match(LcypherParser.SP)
+
+
+ self.state = 900
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
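+            # Bitmask membership test: true only when the lookahead is one of the three
+            # integer-literal token types (103-105).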
+ if ((((_la - 103)) & ~0x3f) == 0 and ((1 << (_la - 103)) & 7) != 0):
+ self.state = 896
+ self.oC_IntegerLiteral()
+ self.state = 898
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 897
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 912
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==12:
+ self.state = 902
+ self.match(LcypherParser.T__11)
+ self.state = 904
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 903
+ self.match(LcypherParser.SP)
+
+
+ self.state = 910
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if ((((_la - 103)) & ~0x3f) == 0 and ((1 << (_la - 103)) & 7) != 0):
+ self.state = 906
+ self.oC_IntegerLiteral()
+ self.state = 908
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 907
+ self.match(LcypherParser.SP)
+
+
+
+
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_LabelNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SchemaName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SchemaNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_LabelName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_LabelName" ):
+ listener.enterOC_LabelName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_LabelName" ):
+ listener.exitOC_LabelName(self)
+
+
+
+
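+    # Grammar: oC_LabelName : oC_SchemaName ;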
+ def oC_LabelName(self):
+
+ localctx = LcypherParser.OC_LabelNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 96, self.RULE_oC_LabelName)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 914
+ self.oC_SchemaName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RelTypeNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SchemaName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SchemaNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RelTypeName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RelTypeName" ):
+ listener.enterOC_RelTypeName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RelTypeName" ):
+ listener.exitOC_RelTypeName(self)
+
+
+
+
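+    # Grammar: oC_RelTypeName : oC_SchemaName ;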
+ def oC_RelTypeName(self):
+
+ localctx = LcypherParser.OC_RelTypeNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 98, self.RULE_oC_RelTypeName)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 916
+ self.oC_SchemaName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_OrExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_OrExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Expression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Expression" ):
+ listener.enterOC_Expression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Expression" ):
+ listener.exitOC_Expression(self)
+
+
+
+
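+    # Grammar: oC_Expression : oC_OrExpression ;  (entry point of the operator-precedence chain below)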
+ def oC_Expression(self):
+
+ localctx = LcypherParser.OC_ExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 100, self.RULE_oC_Expression)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 918
+ self.oC_OrExpression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_OrExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_XorExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_XorExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_XorExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def OR(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.OR)
+ else:
+ return self.getToken(LcypherParser.OR, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_OrExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_OrExpression" ):
+ listener.enterOC_OrExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_OrExpression" ):
+ listener.exitOC_OrExpression(self)
+
+
+
+
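+    # Grammar: oC_OrExpression : oC_XorExpression ( SP OR SP oC_XorExpression )* ;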
+ def oC_OrExpression(self):
+
+ localctx = LcypherParser.OC_OrExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 102, self.RULE_oC_OrExpression)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 920
+ self.oC_XorExpression()
+ self.state = 927
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,154,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 921
+ self.match(LcypherParser.SP)
+ self.state = 922
+ self.match(LcypherParser.OR)
+ self.state = 923
+ self.match(LcypherParser.SP)
+ self.state = 924
+ self.oC_XorExpression()
+ self.state = 929
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,154,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_XorExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_AndExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_AndExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_AndExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def XOR(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.XOR)
+ else:
+ return self.getToken(LcypherParser.XOR, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_XorExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_XorExpression" ):
+ listener.enterOC_XorExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_XorExpression" ):
+ listener.exitOC_XorExpression(self)
+
+
+
+
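+    # Grammar: oC_XorExpression : oC_AndExpression ( SP XOR SP oC_AndExpression )* ;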
+ def oC_XorExpression(self):
+
+ localctx = LcypherParser.OC_XorExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 104, self.RULE_oC_XorExpression)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 930
+ self.oC_AndExpression()
+ self.state = 937
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,155,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 931
+ self.match(LcypherParser.SP)
+ self.state = 932
+ self.match(LcypherParser.XOR)
+ self.state = 933
+ self.match(LcypherParser.SP)
+ self.state = 934
+ self.oC_AndExpression()
+ self.state = 939
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,155,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_AndExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_NotExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_NotExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_NotExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def AND(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.AND)
+ else:
+ return self.getToken(LcypherParser.AND, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_AndExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_AndExpression" ):
+ listener.enterOC_AndExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_AndExpression" ):
+ listener.exitOC_AndExpression(self)
+
+
+
+
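+    # Grammar: oC_AndExpression : oC_NotExpression ( SP AND SP oC_NotExpression )* ;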
+ def oC_AndExpression(self):
+
+ localctx = LcypherParser.OC_AndExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 106, self.RULE_oC_AndExpression)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 940
+ self.oC_NotExpression()
+ self.state = 947
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,156,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 941
+ self.match(LcypherParser.SP)
+ self.state = 942
+ self.match(LcypherParser.AND)
+ self.state = 943
+ self.match(LcypherParser.SP)
+ self.state = 944
+ self.oC_NotExpression()
+ self.state = 949
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,156,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NotExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_ComparisonExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ComparisonExpressionContext,0)
+
+
+ def NOT(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.NOT)
+ else:
+ return self.getToken(LcypherParser.NOT, i)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NotExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NotExpression" ):
+ listener.enterOC_NotExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NotExpression" ):
+ listener.exitOC_NotExpression(self)
+
+
+
+
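+    # Grammar: oC_NotExpression : ( NOT SP? )* oC_ComparisonExpression ;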
+ def oC_NotExpression(self):
+
+ localctx = LcypherParser.OC_NotExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 108, self.RULE_oC_NotExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 956
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==81:
+ self.state = 950
+ self.match(LcypherParser.NOT)
+ self.state = 952
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 951
+ self.match(LcypherParser.SP)
+
+
+ self.state = 958
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 959
+ self.oC_ComparisonExpression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ComparisonExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_AddOrSubtractExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_AddOrSubtractExpressionContext,0)
+
+
+ def oC_PartialComparisonExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PartialComparisonExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PartialComparisonExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ComparisonExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ComparisonExpression" ):
+ listener.enterOC_ComparisonExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ComparisonExpression" ):
+ listener.exitOC_ComparisonExpression(self)
+
+
+
+
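+    # Grammar: oC_ComparisonExpression : oC_AddOrSubtractExpression ( SP? oC_PartialComparisonExpression )* ;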
+ def oC_ComparisonExpression(self):
+
+ localctx = LcypherParser.OC_ComparisonExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 110, self.RULE_oC_ComparisonExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 961
+ self.oC_AddOrSubtractExpression()
+ self.state = 968
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,160,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 963
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 962
+ self.match(LcypherParser.SP)
+
+
+ self.state = 965
+ self.oC_PartialComparisonExpression()
+ self.state = 970
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,160,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_AddOrSubtractExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_MultiplyDivideModuloExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_MultiplyDivideModuloExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_MultiplyDivideModuloExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_AddOrSubtractExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_AddOrSubtractExpression" ):
+ listener.enterOC_AddOrSubtractExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_AddOrSubtractExpression" ):
+ listener.exitOC_AddOrSubtractExpression(self)
+
+
+
+
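+    # Grammar: oC_AddOrSubtractExpression : oC_MultiplyDivideModuloExpression
+    #     ( SP? ( '+' | '-' ) SP? oC_MultiplyDivideModuloExpression )* ;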
+ def oC_AddOrSubtractExpression(self):
+
+ localctx = LcypherParser.OC_AddOrSubtractExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 112, self.RULE_oC_AddOrSubtractExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 971
+ self.oC_MultiplyDivideModuloExpression()
+ self.state = 990
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,166,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 988
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,165,self._ctx)
+ if la_ == 1:
+ self.state = 973
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 972
+ self.match(LcypherParser.SP)
+
+
+ self.state = 975
+ self.match(LcypherParser.T__12)
+ self.state = 977
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 976
+ self.match(LcypherParser.SP)
+
+
+ self.state = 979
+ self.oC_MultiplyDivideModuloExpression()
+ pass
+
+ elif la_ == 2:
+ self.state = 981
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 980
+ self.match(LcypherParser.SP)
+
+
+ self.state = 983
+ self.match(LcypherParser.T__13)
+ self.state = 985
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 984
+ self.match(LcypherParser.SP)
+
+
+ self.state = 987
+ self.oC_MultiplyDivideModuloExpression()
+ pass
+
+
+ self.state = 992
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,166,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MultiplyDivideModuloExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PowerOfExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PowerOfExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PowerOfExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_MultiplyDivideModuloExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_MultiplyDivideModuloExpression" ):
+ listener.enterOC_MultiplyDivideModuloExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_MultiplyDivideModuloExpression" ):
+ listener.exitOC_MultiplyDivideModuloExpression(self)
+
+
+
+
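+    # Grammar: oC_MultiplyDivideModuloExpression : oC_PowerOfExpression
+    #     ( SP? ( '*' | '/' | '%' ) SP? oC_PowerOfExpression )* ;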
+ def oC_MultiplyDivideModuloExpression(self):
+
+ localctx = LcypherParser.OC_MultiplyDivideModuloExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 114, self.RULE_oC_MultiplyDivideModuloExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 993
+ self.oC_PowerOfExpression()
+ self.state = 1020
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,174,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 1018
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,173,self._ctx)
+ if la_ == 1:
+ self.state = 995
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 994
+ self.match(LcypherParser.SP)
+
+
+ self.state = 997
+ self.match(LcypherParser.T__4)
+ self.state = 999
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 998
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1001
+ self.oC_PowerOfExpression()
+ pass
+
+ elif la_ == 2:
+ self.state = 1003
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1002
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1005
+ self.match(LcypherParser.T__14)
+ self.state = 1007
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1006
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1009
+ self.oC_PowerOfExpression()
+ pass
+
+ elif la_ == 3:
+ self.state = 1011
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1010
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1013
+ self.match(LcypherParser.T__15)
+ self.state = 1015
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1014
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1017
+ self.oC_PowerOfExpression()
+ pass
+
+
+ self.state = 1022
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,174,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PowerOfExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_UnaryAddOrSubtractExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_UnaryAddOrSubtractExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_UnaryAddOrSubtractExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PowerOfExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PowerOfExpression" ):
+ listener.enterOC_PowerOfExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PowerOfExpression" ):
+ listener.exitOC_PowerOfExpression(self)
+
+
+
+
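+    # Grammar: oC_PowerOfExpression : oC_UnaryAddOrSubtractExpression
+    #     ( SP? '^' SP? oC_UnaryAddOrSubtractExpression )* ;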
+ def oC_PowerOfExpression(self):
+
+ localctx = LcypherParser.OC_PowerOfExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 116, self.RULE_oC_PowerOfExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1023
+ self.oC_UnaryAddOrSubtractExpression()
+ self.state = 1034
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,177,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 1025
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1024
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1027
+ self.match(LcypherParser.T__16)
+ self.state = 1029
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1028
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1031
+ self.oC_UnaryAddOrSubtractExpression()
+ self.state = 1036
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,177,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_UnaryAddOrSubtractExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_StringListNullOperatorExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_StringListNullOperatorExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_UnaryAddOrSubtractExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_UnaryAddOrSubtractExpression" ):
+ listener.enterOC_UnaryAddOrSubtractExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_UnaryAddOrSubtractExpression" ):
+ listener.exitOC_UnaryAddOrSubtractExpression(self)
+
+
+
+
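+    # Grammar: oC_UnaryAddOrSubtractExpression : ( ( '+' | '-' ) SP? )* oC_StringListNullOperatorExpression ;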
+ def oC_UnaryAddOrSubtractExpression(self):
+
+ localctx = LcypherParser.OC_UnaryAddOrSubtractExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 118, self.RULE_oC_UnaryAddOrSubtractExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1043
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==13 or _la==14:
+ self.state = 1037
+ _la = self._input.LA(1)
+ if not(_la==13 or _la==14):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ self.state = 1039
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1038
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1045
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+ self.state = 1046
+ self.oC_StringListNullOperatorExpression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_StringListNullOperatorExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PropertyOrLabelsExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyOrLabelsExpressionContext,0)
+
+
+ def oC_StringOperatorExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_StringOperatorExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_StringOperatorExpressionContext,i)
+
+
+ def oC_ListOperatorExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ListOperatorExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ListOperatorExpressionContext,i)
+
+
+ def oC_NullOperatorExpression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_NullOperatorExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_NullOperatorExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_StringListNullOperatorExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_StringListNullOperatorExpression" ):
+ listener.enterOC_StringListNullOperatorExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_StringListNullOperatorExpression" ):
+ listener.exitOC_StringListNullOperatorExpression(self)
+
+
+
+
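+    # Grammar: oC_StringListNullOperatorExpression : oC_PropertyOrLabelsExpression
+    #     ( oC_StringOperatorExpression | oC_ListOperatorExpression | oC_NullOperatorExpression )* ;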
+ def oC_StringListNullOperatorExpression(self):
+
+ localctx = LcypherParser.OC_StringListNullOperatorExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 120, self.RULE_oC_StringListNullOperatorExpression)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1048
+ self.oC_PropertyOrLabelsExpression()
+ self.state = 1054
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,181,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 1052
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,180,self._ctx)
+ if la_ == 1:
+ self.state = 1049
+ self.oC_StringOperatorExpression()
+ pass
+
+ elif la_ == 2:
+ self.state = 1050
+ self.oC_ListOperatorExpression()
+ pass
+
+ elif la_ == 3:
+ self.state = 1051
+ self.oC_NullOperatorExpression()
+ pass
+
+
+ self.state = 1056
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,181,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ListOperatorExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def IN(self):
+ return self.getToken(LcypherParser.IN, 0)
+
+ def oC_PropertyOrLabelsExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyOrLabelsExpressionContext,0)
+
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ListOperatorExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ListOperatorExpression" ):
+ listener.enterOC_ListOperatorExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ListOperatorExpression" ):
+ listener.exitOC_ListOperatorExpression(self)
+
+
+
+
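+    # Grammar: oC_ListOperatorExpression : ( SP IN SP? oC_PropertyOrLabelsExpression )
+    #                                    | ( SP? '[' oC_Expression ']' )
+    #                                    | ( SP? '[' oC_Expression? '..' oC_Expression? ']' ) ;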
+ def oC_ListOperatorExpression(self):
+
+ localctx = LcypherParser.OC_ListOperatorExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 122, self.RULE_oC_ListOperatorExpression)
+ self._la = 0 # Token type
+ try:
+ self.state = 1082
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,187,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1057
+ self.match(LcypherParser.SP)
+ self.state = 1058
+ self.match(LcypherParser.IN)
+ self.state = 1060
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1059
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1062
+ self.oC_PropertyOrLabelsExpression()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1064
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1063
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1066
+ self.match(LcypherParser.T__7)
+ self.state = 1067
+ self.oC_Expression()
+ self.state = 1068
+ self.match(LcypherParser.T__8)
+ pass
+
+ elif la_ == 3:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 1071
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1070
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1073
+ self.match(LcypherParser.T__7)
+ self.state = 1075
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
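+                # Bitmask test for tokens that can begin an oC_Expression; both slice bounds are optional.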
+ if (((_la) & ~0x3f) == 0 and ((1 << _la) & 562950037332288) != 0) or ((((_la - 81)) & ~0x3f) == 0 and ((1 << (_la - 81)) & 563083161436033) != 0):
+ self.state = 1074
+ self.oC_Expression()
+
+
+ self.state = 1077
+ self.match(LcypherParser.T__11)
+ self.state = 1079
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if (((_la) & ~0x3f) == 0 and ((1 << _la) & 562950037332288) != 0) or ((((_la - 81)) & ~0x3f) == 0 and ((1 << (_la - 81)) & 563083161436033) != 0):
+ self.state = 1078
+ self.oC_Expression()
+
+
+ self.state = 1081
+ self.match(LcypherParser.T__8)
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_StringOperatorExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PropertyOrLabelsExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyOrLabelsExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def STARTS(self):
+ return self.getToken(LcypherParser.STARTS, 0)
+
+ def WITH(self):
+ return self.getToken(LcypherParser.WITH, 0)
+
+ def ENDS(self):
+ return self.getToken(LcypherParser.ENDS, 0)
+
+ def CONTAINS(self):
+ return self.getToken(LcypherParser.CONTAINS, 0)
+
+ def REGEXP(self):
+ return self.getToken(LcypherParser.REGEXP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_StringOperatorExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_StringOperatorExpression" ):
+ listener.enterOC_StringOperatorExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_StringOperatorExpression" ):
+ listener.exitOC_StringOperatorExpression(self)
+
+
+
+
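+    # Grammar: oC_StringOperatorExpression : ( SP STARTS SP WITH | SP ENDS SP WITH
+    #     | SP CONTAINS | SP REGEXP ) SP? oC_PropertyOrLabelsExpression ;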
+ def oC_StringOperatorExpression(self):
+
+ localctx = LcypherParser.OC_StringOperatorExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 124, self.RULE_oC_StringOperatorExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1096
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,188,self._ctx)
+ if la_ == 1:
+ self.state = 1084
+ self.match(LcypherParser.SP)
+ self.state = 1085
+ self.match(LcypherParser.STARTS)
+ self.state = 1086
+ self.match(LcypherParser.SP)
+ self.state = 1087
+ self.match(LcypherParser.WITH)
+ pass
+
+ elif la_ == 2:
+ self.state = 1088
+ self.match(LcypherParser.SP)
+ self.state = 1089
+ self.match(LcypherParser.ENDS)
+ self.state = 1090
+ self.match(LcypherParser.SP)
+ self.state = 1091
+ self.match(LcypherParser.WITH)
+ pass
+
+ elif la_ == 3:
+ self.state = 1092
+ self.match(LcypherParser.SP)
+ self.state = 1093
+ self.match(LcypherParser.CONTAINS)
+ pass
+
+ elif la_ == 4:
+ self.state = 1094
+ self.match(LcypherParser.SP)
+ self.state = 1095
+ self.match(LcypherParser.REGEXP)
+ pass
+
+
+ self.state = 1099
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1098
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1101
+ self.oC_PropertyOrLabelsExpression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NullOperatorExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def IS(self):
+ return self.getToken(LcypherParser.IS, 0)
+
+ def NULL_(self):
+ return self.getToken(LcypherParser.NULL_, 0)
+
+ def NOT(self):
+ return self.getToken(LcypherParser.NOT, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NullOperatorExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NullOperatorExpression" ):
+ listener.enterOC_NullOperatorExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NullOperatorExpression" ):
+ listener.exitOC_NullOperatorExpression(self)
+
+
+
+
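+    # Grammar: oC_NullOperatorExpression : ( SP IS SP NULL_ ) | ( SP IS SP NOT SP NULL_ ) ;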
+ def oC_NullOperatorExpression(self):
+
+ localctx = LcypherParser.OC_NullOperatorExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 126, self.RULE_oC_NullOperatorExpression)
+ try:
+ self.state = 1113
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,190,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1103
+ self.match(LcypherParser.SP)
+ self.state = 1104
+ self.match(LcypherParser.IS)
+ self.state = 1105
+ self.match(LcypherParser.SP)
+ self.state = 1106
+ self.match(LcypherParser.NULL_)
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1107
+ self.match(LcypherParser.SP)
+ self.state = 1108
+ self.match(LcypherParser.IS)
+ self.state = 1109
+ self.match(LcypherParser.SP)
+ self.state = 1110
+ self.match(LcypherParser.NOT)
+ self.state = 1111
+ self.match(LcypherParser.SP)
+ self.state = 1112
+ self.match(LcypherParser.NULL_)
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PropertyOrLabelsExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Atom(self):
+ return self.getTypedRuleContext(LcypherParser.OC_AtomContext,0)
+
+
+ def oC_PropertyLookup(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PropertyLookupContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyLookupContext,i)
+
+
+ def oC_NodeLabels(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodeLabelsContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PropertyOrLabelsExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PropertyOrLabelsExpression" ):
+ listener.enterOC_PropertyOrLabelsExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PropertyOrLabelsExpression" ):
+ listener.exitOC_PropertyOrLabelsExpression(self)
+
+
+
+
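+    # Grammar: oC_PropertyOrLabelsExpression : oC_Atom ( SP? oC_PropertyLookup )* ( SP? oC_NodeLabels )? ;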
+ def oC_PropertyOrLabelsExpression(self):
+
+ localctx = LcypherParser.OC_PropertyOrLabelsExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 128, self.RULE_oC_PropertyOrLabelsExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1115
+ self.oC_Atom()
+ self.state = 1122
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,192,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 1117
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1116
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1119
+ self.oC_PropertyLookup()
+ self.state = 1124
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,192,self._ctx)
+
+ self.state = 1129
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,194,self._ctx)
+ if la_ == 1:
+ self.state = 1126
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1125
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1128
+ self.oC_NodeLabels()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_AtomContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Literal(self):
+ return self.getTypedRuleContext(LcypherParser.OC_LiteralContext,0)
+
+
+ def oC_Parameter(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ParameterContext,0)
+
+
+ def oC_CaseExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_CaseExpressionContext,0)
+
+
+ def COUNT(self):
+ return self.getToken(LcypherParser.COUNT, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_ListComprehension(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ListComprehensionContext,0)
+
+
+ def oC_PatternComprehension(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PatternComprehensionContext,0)
+
+
+ def ALL(self):
+ return self.getToken(LcypherParser.ALL, 0)
+
+ def oC_FilterExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_FilterExpressionContext,0)
+
+
+ def ANY(self):
+ return self.getToken(LcypherParser.ANY, 0)
+
+ def NONE(self):
+ return self.getToken(LcypherParser.NONE, 0)
+
+ def SINGLE(self):
+ return self.getToken(LcypherParser.SINGLE, 0)
+
+ def oC_RelationshipsPattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RelationshipsPatternContext,0)
+
+
+ def oC_ParenthesizedExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ParenthesizedExpressionContext,0)
+
+
+ def oC_FunctionInvocation(self):
+ return self.getTypedRuleContext(LcypherParser.OC_FunctionInvocationContext,0)
+
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Atom
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Atom" ):
+ listener.enterOC_Atom(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Atom" ):
+ listener.exitOC_Atom(self)
+
+
+
+
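+    # Grammar: oC_Atom matches one of fourteen alternatives: a literal, parameter, CASE expression,
+    # COUNT ( * ), list comprehension, pattern comprehension, ALL/ANY/NONE/SINGLE ( filter ),
+    # relationships pattern, parenthesized expression, function invocation, or variable.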
+ def oC_Atom(self):
+
+ localctx = LcypherParser.OC_AtomContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 130, self.RULE_oC_Atom)
+ self._la = 0 # Token type
+ try:
+ self.state = 1209
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,210,self._ctx)
+ if la_ == 1:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1131
+ self.oC_Literal()
+ pass
+
+ elif la_ == 2:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1132
+ self.oC_Parameter()
+ pass
+
+ elif la_ == 3:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 1133
+ self.oC_CaseExpression()
+ pass
+
+ elif la_ == 4:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 1134
+ self.match(LcypherParser.COUNT)
+ self.state = 1136
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1135
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1138
+ self.match(LcypherParser.T__5)
+ self.state = 1140
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1139
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1142
+ self.match(LcypherParser.T__4)
+ self.state = 1144
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1143
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1146
+ self.match(LcypherParser.T__6)
+ pass
+
+ elif la_ == 5:
+ self.enterOuterAlt(localctx, 5)
+ self.state = 1147
+ self.oC_ListComprehension()
+ pass
+
+ elif la_ == 6:
+ self.enterOuterAlt(localctx, 6)
+ self.state = 1148
+ self.oC_PatternComprehension()
+ pass
+
+ elif la_ == 7:
+ self.enterOuterAlt(localctx, 7)
+ self.state = 1149
+ self.match(LcypherParser.ALL)
+ self.state = 1151
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1150
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1153
+ self.match(LcypherParser.T__5)
+ self.state = 1155
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1154
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1157
+ self.oC_FilterExpression()
+ self.state = 1159
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1158
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1161
+ self.match(LcypherParser.T__6)
+ pass
+
+ elif la_ == 8:
+ self.enterOuterAlt(localctx, 8)
+ self.state = 1163
+ self.match(LcypherParser.ANY)
+ self.state = 1165
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1164
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1167
+ self.match(LcypherParser.T__5)
+ self.state = 1169
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1168
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1171
+ self.oC_FilterExpression()
+ self.state = 1173
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1172
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1175
+ self.match(LcypherParser.T__6)
+ pass
+
+ elif la_ == 9:
+ self.enterOuterAlt(localctx, 9)
+ self.state = 1177
+ self.match(LcypherParser.NONE)
+ self.state = 1179
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1178
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1181
+ self.match(LcypherParser.T__5)
+ self.state = 1183
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1182
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1185
+ self.oC_FilterExpression()
+ self.state = 1187
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1186
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1189
+ self.match(LcypherParser.T__6)
+ pass
+
+ elif la_ == 10:
+ self.enterOuterAlt(localctx, 10)
+ self.state = 1191
+ self.match(LcypherParser.SINGLE)
+ self.state = 1193
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1192
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1195
+ self.match(LcypherParser.T__5)
+ self.state = 1197
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1196
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1199
+ self.oC_FilterExpression()
+ self.state = 1201
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1200
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1203
+ self.match(LcypherParser.T__6)
+ pass
+
+ elif la_ == 11:
+ self.enterOuterAlt(localctx, 11)
+ self.state = 1205
+ self.oC_RelationshipsPattern()
+ pass
+
+ elif la_ == 12:
+ self.enterOuterAlt(localctx, 12)
+ self.state = 1206
+ self.oC_ParenthesizedExpression()
+ pass
+
+ elif la_ == 13:
+ self.enterOuterAlt(localctx, 13)
+ self.state = 1207
+ self.oC_FunctionInvocation()
+ pass
+
+ elif la_ == 14:
+ self.enterOuterAlt(localctx, 14)
+ self.state = 1208
+ self.oC_Variable()
+ pass
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_LiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_NumberLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NumberLiteralContext,0)
+
+
+ def StringLiteral(self):
+ return self.getToken(LcypherParser.StringLiteral, 0)
+
+ def oC_BooleanLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_BooleanLiteralContext,0)
+
+
+ def NULL_(self):
+ return self.getToken(LcypherParser.NULL_, 0)
+
+ def oC_MapLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_MapLiteralContext,0)
+
+
+ def oC_ListLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ListLiteralContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Literal
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Literal" ):
+ listener.enterOC_Literal(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Literal" ):
+ listener.exitOC_Literal(self)
+
+
+
+
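+    # Grammar: oC_Literal : oC_NumberLiteral | StringLiteral | oC_BooleanLiteral | NULL_
+    #     | oC_MapLiteral | oC_ListLiteral ;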
+ def oC_Literal(self):
+
+ localctx = LcypherParser.OC_LiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 132, self.RULE_oC_Literal)
+ try:
+ self.state = 1217
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [103, 104, 105, 113, 114]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1211
+ self.oC_NumberLiteral()
+ pass
+ elif token in [101]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1212
+ self.match(LcypherParser.StringLiteral)
+ pass
+ elif token in [93, 94]:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 1213
+ self.oC_BooleanLiteral()
+ pass
+ elif token in [88]:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 1214
+ self.match(LcypherParser.NULL_)
+ pass
+ elif token in [24]:
+ self.enterOuterAlt(localctx, 5)
+ self.state = 1215
+ self.oC_MapLiteral()
+ pass
+ elif token in [8]:
+ self.enterOuterAlt(localctx, 6)
+ self.state = 1216
+ self.oC_ListLiteral()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_BooleanLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def TRUE_(self):
+ return self.getToken(LcypherParser.TRUE_, 0)
+
+ def FALSE_(self):
+ return self.getToken(LcypherParser.FALSE_, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_BooleanLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_BooleanLiteral" ):
+ listener.enterOC_BooleanLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_BooleanLiteral" ):
+ listener.exitOC_BooleanLiteral(self)
+
+
+
+
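+    # Grammar: oC_BooleanLiteral : TRUE_ | FALSE_ ;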
+ def oC_BooleanLiteral(self):
+
+ localctx = LcypherParser.OC_BooleanLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 134, self.RULE_oC_BooleanLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1219
+ _la = self._input.LA(1)
+ if not(_la==93 or _la==94):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ListLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ListLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ListLiteral" ):
+ listener.enterOC_ListLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ListLiteral" ):
+ listener.exitOC_ListLiteral(self)
+
+
+
+
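+    # Grammar: oC_ListLiteral : '[' SP? ( oC_Expression SP? ( ',' SP? oC_Expression SP? )* )? ']' ;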
+ def oC_ListLiteral(self):
+
+ localctx = LcypherParser.OC_ListLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 136, self.RULE_oC_ListLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1221
+ self.match(LcypherParser.T__7)
+ self.state = 1223
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1222
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1242
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
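+            # Same begin-an-oC_Expression bitmask as above: an empty list literal skips the element list entirely.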
+ if (((_la) & ~0x3f) == 0 and ((1 << _la) & 562950037332288) != 0) or ((((_la - 81)) & ~0x3f) == 0 and ((1 << (_la - 81)) & 563083161436033) != 0):
+ self.state = 1225
+ self.oC_Expression()
+ self.state = 1227
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1226
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1239
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==2:
+ self.state = 1229
+ self.match(LcypherParser.T__1)
+ self.state = 1231
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1230
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1233
+ self.oC_Expression()
+ self.state = 1235
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1234
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1241
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+
+
+ self.state = 1244
+ self.match(LcypherParser.T__8)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PartialComparisonExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_AddOrSubtractExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_AddOrSubtractExpressionContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PartialComparisonExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PartialComparisonExpression" ):
+ listener.enterOC_PartialComparisonExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PartialComparisonExpression" ):
+ listener.exitOC_PartialComparisonExpression(self)
+
+
+
+
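+    # Grammar: oC_PartialComparisonExpression : ( '=' | '<>' | '<' | '>' | '<=' | '>=' )
+    #     SP? oC_AddOrSubtractExpression ;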
+ def oC_PartialComparisonExpression(self):
+
+ localctx = LcypherParser.OC_PartialComparisonExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 138, self.RULE_oC_PartialComparisonExpression)
+ self._la = 0 # Token type
+ try:
+ self.state = 1276
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [3]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1246
+ self.match(LcypherParser.T__2)
+ self.state = 1248
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1247
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1250
+ self.oC_AddOrSubtractExpression()
+ pass
+ elif token in [18]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1251
+ self.match(LcypherParser.T__17)
+ self.state = 1253
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1252
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1255
+ self.oC_AddOrSubtractExpression()
+ pass
+ elif token in [19]:
+ self.enterOuterAlt(localctx, 3)
+ self.state = 1256
+ self.match(LcypherParser.T__18)
+ self.state = 1258
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1257
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1260
+ self.oC_AddOrSubtractExpression()
+ pass
+ elif token in [20]:
+ self.enterOuterAlt(localctx, 4)
+ self.state = 1261
+ self.match(LcypherParser.T__19)
+ self.state = 1263
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1262
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1265
+ self.oC_AddOrSubtractExpression()
+ pass
+ elif token in [21]:
+ self.enterOuterAlt(localctx, 5)
+ self.state = 1266
+ self.match(LcypherParser.T__20)
+ self.state = 1268
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1267
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1270
+ self.oC_AddOrSubtractExpression()
+ pass
+ elif token in [22]:
+ self.enterOuterAlt(localctx, 6)
+ self.state = 1271
+ self.match(LcypherParser.T__21)
+ self.state = 1273
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1272
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1275
+ self.oC_AddOrSubtractExpression()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ParenthesizedExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ParenthesizedExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ParenthesizedExpression" ):
+ listener.enterOC_ParenthesizedExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ParenthesizedExpression" ):
+ listener.exitOC_ParenthesizedExpression(self)
+
+
+
+
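+    # Parses a parenthesized expression: '(' SP? oC_Expression SP? ')'.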
+ def oC_ParenthesizedExpression(self):
+
+ localctx = LcypherParser.OC_ParenthesizedExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 140, self.RULE_oC_ParenthesizedExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1278
+ self.match(LcypherParser.T__5)
+ self.state = 1280
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1279
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1282
+ self.oC_Expression()
+ self.state = 1284
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1283
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1286
+ self.match(LcypherParser.T__6)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RelationshipsPatternContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_NodePattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NodePatternContext,0)
+
+
+ def oC_PatternElementChain(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PatternElementChainContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PatternElementChainContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RelationshipsPattern
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RelationshipsPattern" ):
+ listener.enterOC_RelationshipsPattern(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RelationshipsPattern" ):
+ listener.exitOC_RelationshipsPattern(self)
+
+
+
+
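+    # Parses oC_NodePattern followed by one or more (SP? oC_PatternElementChain).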
+ def oC_RelationshipsPattern(self):
+
+ localctx = LcypherParser.OC_RelationshipsPatternContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 142, self.RULE_oC_RelationshipsPattern)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1288
+ self.oC_NodePattern()
+ self.state = 1293
+ self._errHandler.sync(self)
+ _alt = 1
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt == 1:
+ self.state = 1290
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1289
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1292
+ self.oC_PatternElementChain()
+
+ else:
+ raise NoViableAltException(self)
+ self.state = 1295
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,228,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_FilterExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_IdInColl(self):
+ return self.getTypedRuleContext(LcypherParser.OC_IdInCollContext,0)
+
+
+ def oC_Where(self):
+ return self.getTypedRuleContext(LcypherParser.OC_WhereContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_FilterExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_FilterExpression" ):
+ listener.enterOC_FilterExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_FilterExpression" ):
+ listener.exitOC_FilterExpression(self)
+
+
+
+
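+    # Parses oC_IdInColl, optionally followed by (SP? oC_Where).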
+ def oC_FilterExpression(self):
+
+ localctx = LcypherParser.OC_FilterExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 144, self.RULE_oC_FilterExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1297
+ self.oC_IdInColl()
+ self.state = 1302
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,230,self._ctx)
+ if la_ == 1:
+ self.state = 1299
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1298
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1301
+ self.oC_Where()
+
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_IdInCollContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def IN(self):
+ return self.getToken(LcypherParser.IN, 0)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_IdInColl
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_IdInColl" ):
+ listener.enterOC_IdInColl(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_IdInColl" ):
+ listener.exitOC_IdInColl(self)
+
+
+
+
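+    # Parses oC_Variable SP IN SP oC_Expression, e.g. the "x IN list" part of
+    # a list predicate or comprehension.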
+ def oC_IdInColl(self):
+
+ localctx = LcypherParser.OC_IdInCollContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 146, self.RULE_oC_IdInColl)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1304
+ self.oC_Variable()
+ self.state = 1305
+ self.match(LcypherParser.SP)
+ self.state = 1306
+ self.match(LcypherParser.IN)
+ self.state = 1307
+ self.match(LcypherParser.SP)
+ self.state = 1308
+ self.oC_Expression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_FunctionInvocationContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_FunctionName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_FunctionNameContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def DISTINCT(self):
+ return self.getToken(LcypherParser.DISTINCT, 0)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_FunctionInvocation
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_FunctionInvocation" ):
+ listener.enterOC_FunctionInvocation(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_FunctionInvocation" ):
+ listener.exitOC_FunctionInvocation(self)
+
+
+
+
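+    # Parses a function call: oC_FunctionName SP? '(' SP? (DISTINCT SP?)?
+    # followed by an optional comma-separated list of oC_Expression arguments
+    # (SP allowed around each comma), then ')'.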
+ def oC_FunctionInvocation(self):
+
+ localctx = LcypherParser.OC_FunctionInvocationContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 148, self.RULE_oC_FunctionInvocation)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1310
+ self.oC_FunctionName()
+ self.state = 1312
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1311
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1314
+ self.match(LcypherParser.T__5)
+ self.state = 1316
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1315
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1322
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==64:
+ self.state = 1318
+ self.match(LcypherParser.DISTINCT)
+ self.state = 1320
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1319
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 1341
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if (((_la) & ~0x3f) == 0 and ((1 << _la) & 562950037332288) != 0) or ((((_la - 81)) & ~0x3f) == 0 and ((1 << (_la - 81)) & 563083161436033) != 0):
+ self.state = 1324
+ self.oC_Expression()
+ self.state = 1326
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1325
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1338
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==2:
+ self.state = 1328
+ self.match(LcypherParser.T__1)
+ self.state = 1330
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1329
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1332
+ self.oC_Expression()
+ self.state = 1334
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1333
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1340
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+
+
+ self.state = 1343
+ self.match(LcypherParser.T__6)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_FunctionNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Namespace(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NamespaceContext,0)
+
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def EXISTS(self):
+ return self.getToken(LcypherParser.EXISTS, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_FunctionName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_FunctionName" ):
+ listener.enterOC_FunctionName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_FunctionName" ):
+ listener.exitOC_FunctionName(self)
+
+
+
+
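+    # A function name is either oC_Namespace oC_SymbolicName or the EXISTS keyword.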
+ def oC_FunctionName(self):
+
+ localctx = LcypherParser.OC_FunctionNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 150, self.RULE_oC_FunctionName)
+ try:
+ self.state = 1349
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [89, 90, 91, 92, 106, 115, 116, 117, 130]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1345
+ self.oC_Namespace()
+ self.state = 1346
+ self.oC_SymbolicName()
+ pass
+ elif token in [95]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1348
+ self.match(LcypherParser.EXISTS)
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ExplicitProcedureInvocationContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_ProcedureName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ProcedureNameContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ExplicitProcedureInvocation
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ExplicitProcedureInvocation" ):
+ listener.enterOC_ExplicitProcedureInvocation(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ExplicitProcedureInvocation" ):
+ listener.exitOC_ExplicitProcedureInvocation(self)
+
+
+
+
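+    # Parses oC_ProcedureName SP? '(' SP? with an optional comma-separated
+    # oC_Expression argument list, then ')'.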
+ def oC_ExplicitProcedureInvocation(self):
+
+ localctx = LcypherParser.OC_ExplicitProcedureInvocationContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 152, self.RULE_oC_ExplicitProcedureInvocation)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1351
+ self.oC_ProcedureName()
+ self.state = 1353
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1352
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1355
+ self.match(LcypherParser.T__5)
+ self.state = 1357
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1356
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1376
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if (((_la) & ~0x3f) == 0 and ((1 << _la) & 562950037332288) != 0) or ((((_la - 81)) & ~0x3f) == 0 and ((1 << (_la - 81)) & 563083161436033) != 0):
+ self.state = 1359
+ self.oC_Expression()
+ self.state = 1361
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1360
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1373
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==2:
+ self.state = 1363
+ self.match(LcypherParser.T__1)
+ self.state = 1365
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1364
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1367
+ self.oC_Expression()
+ self.state = 1369
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1368
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1375
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+
+
+ self.state = 1378
+ self.match(LcypherParser.T__6)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ImplicitProcedureInvocationContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_ProcedureName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ProcedureNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ImplicitProcedureInvocation
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ImplicitProcedureInvocation" ):
+ listener.enterOC_ImplicitProcedureInvocation(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ImplicitProcedureInvocation" ):
+ listener.exitOC_ImplicitProcedureInvocation(self)
+
+
+
+
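+    # An implicit procedure invocation is a bare oC_ProcedureName with no
+    # argument list.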
+ def oC_ImplicitProcedureInvocation(self):
+
+ localctx = LcypherParser.OC_ImplicitProcedureInvocationContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 154, self.RULE_oC_ImplicitProcedureInvocation)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1380
+ self.oC_ProcedureName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ProcedureResultFieldContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ProcedureResultField
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ProcedureResultField" ):
+ listener.enterOC_ProcedureResultField(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ProcedureResultField" ):
+ listener.exitOC_ProcedureResultField(self)
+
+
+
+
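+    # A procedure result field is a single oC_SymbolicName.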
+ def oC_ProcedureResultField(self):
+
+ localctx = LcypherParser.OC_ProcedureResultFieldContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 156, self.RULE_oC_ProcedureResultField)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1382
+ self.oC_SymbolicName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ProcedureNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Namespace(self):
+ return self.getTypedRuleContext(LcypherParser.OC_NamespaceContext,0)
+
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ProcedureName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ProcedureName" ):
+ listener.enterOC_ProcedureName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ProcedureName" ):
+ listener.exitOC_ProcedureName(self)
+
+
+
+
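+    # A procedure name is oC_Namespace followed by oC_SymbolicName.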
+ def oC_ProcedureName(self):
+
+ localctx = LcypherParser.OC_ProcedureNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 158, self.RULE_oC_ProcedureName)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1384
+ self.oC_Namespace()
+ self.state = 1385
+ self.oC_SymbolicName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NamespaceContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SymbolicName(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_SymbolicNameContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Namespace
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Namespace" ):
+ listener.enterOC_Namespace(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Namespace" ):
+ listener.exitOC_Namespace(self)
+
+
+
+
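+    # A namespace is zero or more (oC_SymbolicName '.') prefixes.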
+ def oC_Namespace(self):
+
+ localctx = LcypherParser.OC_NamespaceContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 160, self.RULE_oC_Namespace)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1392
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,248,self._ctx)
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt==1:
+ self.state = 1387
+ self.oC_SymbolicName()
+ self.state = 1388
+ self.match(LcypherParser.T__22)
+ self.state = 1394
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,248,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ListComprehensionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_FilterExpression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_FilterExpressionContext,0)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Expression(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ListComprehension
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ListComprehension" ):
+ listener.enterOC_ListComprehension(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ListComprehension" ):
+ listener.exitOC_ListComprehension(self)
+
+
+
+
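+    # Parses '[' SP? oC_FilterExpression (SP? '|' SP? oC_Expression)? SP? ']'.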
+ def oC_ListComprehension(self):
+
+ localctx = LcypherParser.OC_ListComprehensionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 162, self.RULE_oC_ListComprehension)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1395
+ self.match(LcypherParser.T__7)
+ self.state = 1397
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1396
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1399
+ self.oC_FilterExpression()
+ self.state = 1408
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,252,self._ctx)
+ if la_ == 1:
+ self.state = 1401
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1400
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1403
+ self.match(LcypherParser.T__10)
+ self.state = 1405
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1404
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1407
+ self.oC_Expression()
+
+
+ self.state = 1411
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1410
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1413
+ self.match(LcypherParser.T__8)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PatternComprehensionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_RelationshipsPattern(self):
+ return self.getTypedRuleContext(LcypherParser.OC_RelationshipsPatternContext,0)
+
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_Variable(self):
+ return self.getTypedRuleContext(LcypherParser.OC_VariableContext,0)
+
+
+ def WHERE(self):
+ return self.getToken(LcypherParser.WHERE, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PatternComprehension
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PatternComprehension" ):
+ listener.enterOC_PatternComprehension(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PatternComprehension" ):
+ listener.exitOC_PatternComprehension(self)
+
+
+
+
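+    # Parses '[' SP? (oC_Variable SP? '=' SP?)? oC_RelationshipsPattern SP?
+    # (WHERE SP? oC_Expression SP?)? '|' SP? oC_Expression SP? ']'.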
+ def oC_PatternComprehension(self):
+
+ localctx = LcypherParser.OC_PatternComprehensionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 164, self.RULE_oC_PatternComprehension)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1415
+ self.match(LcypherParser.T__7)
+ self.state = 1417
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1416
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1427
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if ((((_la - 89)) & ~0x3f) == 0 and ((1 << (_la - 89)) & 2199493148687) != 0):
+ self.state = 1419
+ self.oC_Variable()
+ self.state = 1421
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1420
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1423
+ self.match(LcypherParser.T__2)
+ self.state = 1425
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1424
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 1429
+ self.oC_RelationshipsPattern()
+ self.state = 1431
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1430
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1441
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==77:
+ self.state = 1433
+ self.match(LcypherParser.WHERE)
+ self.state = 1435
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1434
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1437
+ self.oC_Expression()
+ self.state = 1439
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1438
+ self.match(LcypherParser.SP)
+
+
+
+
+ self.state = 1443
+ self.match(LcypherParser.T__10)
+ self.state = 1445
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1444
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1447
+ self.oC_Expression()
+ self.state = 1449
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1448
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1451
+ self.match(LcypherParser.T__8)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PropertyLookupContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_PropertyKeyName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyKeyNameContext,0)
+
+
+ def SP(self):
+ return self.getToken(LcypherParser.SP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PropertyLookup
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PropertyLookup" ):
+ listener.enterOC_PropertyLookup(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PropertyLookup" ):
+ listener.exitOC_PropertyLookup(self)
+
+
+
+
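+    # Parses a property lookup: '.' SP? oC_PropertyKeyName.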
+ def oC_PropertyLookup(self):
+
+ localctx = LcypherParser.OC_PropertyLookupContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 166, self.RULE_oC_PropertyLookup)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1453
+ self.match(LcypherParser.T__22)
+ self.state = 1455
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1454
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1457
+ self.oC_PropertyKeyName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_CaseExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def END(self):
+ return self.getToken(LcypherParser.END, 0)
+
+ def ELSE(self):
+ return self.getToken(LcypherParser.ELSE, 0)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def CASE(self):
+ return self.getToken(LcypherParser.CASE, 0)
+
+ def oC_CaseAlternatives(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_CaseAlternativesContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_CaseAlternativesContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_CaseExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_CaseExpression" ):
+ listener.enterOC_CaseExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_CaseExpression" ):
+ listener.exitOC_CaseExpression(self)
+
+
+
+
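+    # Parses both CASE forms: CASE (SP? oC_CaseAlternatives)+ and
+    # CASE SP? oC_Expression (SP? oC_CaseAlternatives)+, each optionally
+    # followed by SP? ELSE SP? oC_Expression, and closed by SP? END.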
+ def oC_CaseExpression(self):
+
+ localctx = LcypherParser.OC_CaseExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 168, self.RULE_oC_CaseExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1481
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,270,self._ctx)
+ if la_ == 1:
+ self.state = 1459
+ self.match(LcypherParser.CASE)
+ self.state = 1464
+ self._errHandler.sync(self)
+ _alt = 1
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt == 1:
+ self.state = 1461
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1460
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1463
+ self.oC_CaseAlternatives()
+
+ else:
+ raise NoViableAltException(self)
+ self.state = 1466
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,266,self._ctx)
+
+ pass
+
+ elif la_ == 2:
+ self.state = 1468
+ self.match(LcypherParser.CASE)
+ self.state = 1470
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1469
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1472
+ self.oC_Expression()
+ self.state = 1477
+ self._errHandler.sync(self)
+ _alt = 1
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt == 1:
+ self.state = 1474
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1473
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1476
+ self.oC_CaseAlternatives()
+
+ else:
+ raise NoViableAltException(self)
+ self.state = 1479
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,269,self._ctx)
+
+ pass
+
+
+ self.state = 1491
+ self._errHandler.sync(self)
+ la_ = self._interp.adaptivePredict(self._input,273,self._ctx)
+ if la_ == 1:
+ self.state = 1484
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1483
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1486
+ self.match(LcypherParser.ELSE)
+ self.state = 1488
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1487
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1490
+ self.oC_Expression()
+
+
+ self.state = 1494
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1493
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1496
+ self.match(LcypherParser.END)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_CaseAlternativesContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def WHEN(self):
+ return self.getToken(LcypherParser.WHEN, 0)
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def THEN(self):
+ return self.getToken(LcypherParser.THEN, 0)
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_CaseAlternatives
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_CaseAlternatives" ):
+ listener.enterOC_CaseAlternatives(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_CaseAlternatives" ):
+ listener.exitOC_CaseAlternatives(self)
+
+
+
+
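+    # Parses one CASE branch: WHEN SP? oC_Expression SP? THEN SP? oC_Expression.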
+ def oC_CaseAlternatives(self):
+
+ localctx = LcypherParser.OC_CaseAlternativesContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 170, self.RULE_oC_CaseAlternatives)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1498
+ self.match(LcypherParser.WHEN)
+ self.state = 1500
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1499
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1502
+ self.oC_Expression()
+ self.state = 1504
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1503
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1506
+ self.match(LcypherParser.THEN)
+ self.state = 1508
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1507
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1510
+ self.oC_Expression()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_VariableContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Variable
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Variable" ):
+ listener.enterOC_Variable(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Variable" ):
+ listener.exitOC_Variable(self)
+
+
+
+
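+    # A variable is a single oC_SymbolicName.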
+ def oC_Variable(self):
+
+ localctx = LcypherParser.OC_VariableContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 172, self.RULE_oC_Variable)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1512
+ self.oC_SymbolicName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_NumberLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_DoubleLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_DoubleLiteralContext,0)
+
+
+ def oC_IntegerLiteral(self):
+ return self.getTypedRuleContext(LcypherParser.OC_IntegerLiteralContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_NumberLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_NumberLiteral" ):
+ listener.enterOC_NumberLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_NumberLiteral" ):
+ listener.exitOC_NumberLiteral(self)
+
+
+
+
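+    # A number literal is either oC_DoubleLiteral or oC_IntegerLiteral.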
+ def oC_NumberLiteral(self):
+
+ localctx = LcypherParser.OC_NumberLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 174, self.RULE_oC_NumberLiteral)
+ try:
+ self.state = 1516
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [113, 114]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1514
+ self.oC_DoubleLiteral()
+ pass
+ elif token in [103, 104, 105]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1515
+ self.oC_IntegerLiteral()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_MapLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def oC_PropertyKeyName(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PropertyKeyNameContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyKeyNameContext,i)
+
+
+ def oC_Expression(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_ExpressionContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_ExpressionContext,i)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_MapLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_MapLiteral" ):
+ listener.enterOC_MapLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_MapLiteral" ):
+ listener.exitOC_MapLiteral(self)
+
+
+
+
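+    # Parses '{' SP? with an optional comma-separated list of
+    # oC_PropertyKeyName SP? ':' SP? oC_Expression entries, then '}'.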
+ def oC_MapLiteral(self):
+
+ localctx = LcypherParser.OC_MapLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 176, self.RULE_oC_MapLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1518
+ self.match(LcypherParser.T__23)
+ self.state = 1520
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1519
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1555
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if ((((_la - 48)) & ~0x3f) == 0 and ((1 << (_la - 48)) & 297237300058759167) != 0) or ((((_la - 115)) & ~0x3f) == 0 and ((1 << (_la - 115)) & 40959) != 0):
+ self.state = 1522
+ self.oC_PropertyKeyName()
+ self.state = 1524
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1523
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1526
+ self.match(LcypherParser.T__9)
+ self.state = 1528
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1527
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1530
+ self.oC_Expression()
+ self.state = 1532
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1531
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1552
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ while _la==2:
+ self.state = 1534
+ self.match(LcypherParser.T__1)
+ self.state = 1536
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1535
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1538
+ self.oC_PropertyKeyName()
+ self.state = 1540
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1539
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1542
+ self.match(LcypherParser.T__9)
+ self.state = 1544
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1543
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1546
+ self.oC_Expression()
+ self.state = 1548
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1547
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1554
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+
+
+
+ self.state = 1557
+ self.match(LcypherParser.T__24)
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ParameterContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def DecimalInteger(self):
+ return self.getToken(LcypherParser.DecimalInteger, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Parameter
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Parameter" ):
+ listener.enterOC_Parameter(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Parameter" ):
+ listener.exitOC_Parameter(self)
+
+
+
+
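+    # Parses a parameter: '$' followed by oC_SymbolicName or DecimalInteger.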
+ def oC_Parameter(self):
+
+ localctx = LcypherParser.OC_ParameterContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 178, self.RULE_oC_Parameter)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1559
+ self.match(LcypherParser.T__25)
+ self.state = 1562
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [89, 90, 91, 92, 106, 115, 116, 117, 130]:
+ self.state = 1560
+ self.oC_SymbolicName()
+ pass
+ elif token in [104]:
+ self.state = 1561
+ self.match(LcypherParser.DecimalInteger)
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PropertyExpressionContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_Atom(self):
+ return self.getTypedRuleContext(LcypherParser.OC_AtomContext,0)
+
+
+ def oC_PropertyLookup(self, i:int=None):
+ if i is None:
+ return self.getTypedRuleContexts(LcypherParser.OC_PropertyLookupContext)
+ else:
+ return self.getTypedRuleContext(LcypherParser.OC_PropertyLookupContext,i)
+
+
+ def SP(self, i:int=None):
+ if i is None:
+ return self.getTokens(LcypherParser.SP)
+ else:
+ return self.getToken(LcypherParser.SP, i)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PropertyExpression
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PropertyExpression" ):
+ listener.enterOC_PropertyExpression(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PropertyExpression" ):
+ listener.exitOC_PropertyExpression(self)
+
+
+
+
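+    # Parses oC_Atom followed by one or more (SP? oC_PropertyLookup).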
+ def oC_PropertyExpression(self):
+
+ localctx = LcypherParser.OC_PropertyExpressionContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 180, self.RULE_oC_PropertyExpression)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1564
+ self.oC_Atom()
+ self.state = 1569
+ self._errHandler.sync(self)
+ _alt = 1
+ while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+ if _alt == 1:
+ self.state = 1566
+ self._errHandler.sync(self)
+ _la = self._input.LA(1)
+ if _la==131:
+ self.state = 1565
+ self.match(LcypherParser.SP)
+
+
+ self.state = 1568
+ self.oC_PropertyLookup()
+
+ else:
+ raise NoViableAltException(self)
+ self.state = 1571
+ self._errHandler.sync(self)
+ _alt = self._interp.adaptivePredict(self._input,291,self._ctx)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_PropertyKeyNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SchemaName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SchemaNameContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_PropertyKeyName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_PropertyKeyName" ):
+ listener.enterOC_PropertyKeyName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_PropertyKeyName" ):
+ listener.exitOC_PropertyKeyName(self)
+
+
+
+
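+    # A property key name is a single oC_SchemaName.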
+ def oC_PropertyKeyName(self):
+
+ localctx = LcypherParser.OC_PropertyKeyNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 182, self.RULE_oC_PropertyKeyName)
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1573
+ self.oC_SchemaName()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_IntegerLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def HexInteger(self):
+ return self.getToken(LcypherParser.HexInteger, 0)
+
+ def OctalInteger(self):
+ return self.getToken(LcypherParser.OctalInteger, 0)
+
+ def DecimalInteger(self):
+ return self.getToken(LcypherParser.DecimalInteger, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_IntegerLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_IntegerLiteral" ):
+ listener.enterOC_IntegerLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_IntegerLiteral" ):
+ listener.exitOC_IntegerLiteral(self)
+
+
+
+
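+    # Matches one of HexInteger, OctalInteger or DecimalInteger.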
+ def oC_IntegerLiteral(self):
+
+ localctx = LcypherParser.OC_IntegerLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 184, self.RULE_oC_IntegerLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1575
+ _la = self._input.LA(1)
+ if not(((((_la - 103)) & ~0x3f) == 0 and ((1 << (_la - 103)) & 7) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_DoubleLiteralContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def ExponentDecimalReal(self):
+ return self.getToken(LcypherParser.ExponentDecimalReal, 0)
+
+ def RegularDecimalReal(self):
+ return self.getToken(LcypherParser.RegularDecimalReal, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_DoubleLiteral
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_DoubleLiteral" ):
+ listener.enterOC_DoubleLiteral(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_DoubleLiteral" ):
+ listener.exitOC_DoubleLiteral(self)
+
+
+
+
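+    # Matches ExponentDecimalReal or RegularDecimalReal.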
+ def oC_DoubleLiteral(self):
+
+ localctx = LcypherParser.OC_DoubleLiteralContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 186, self.RULE_oC_DoubleLiteral)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1577
+ _la = self._input.LA(1)
+ if not(_la==113 or _la==114):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SchemaNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def oC_SymbolicName(self):
+ return self.getTypedRuleContext(LcypherParser.OC_SymbolicNameContext,0)
+
+
+ def oC_ReservedWord(self):
+ return self.getTypedRuleContext(LcypherParser.OC_ReservedWordContext,0)
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SchemaName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SchemaName" ):
+ listener.enterOC_SchemaName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SchemaName" ):
+ listener.exitOC_SchemaName(self)
+
+
+
+
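+    # A schema name is either oC_SymbolicName or oC_ReservedWord.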
+ def oC_SchemaName(self):
+
+ localctx = LcypherParser.OC_SchemaNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 188, self.RULE_oC_SchemaName)
+ try:
+ self.state = 1581
+ self._errHandler.sync(self)
+ token = self._input.LA(1)
+ if token in [89, 90, 91, 92, 106, 115, 116, 117, 130]:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1579
+ self.oC_SymbolicName()
+ pass
+ elif token in [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 77, 78, 79, 80, 81, 82, 83, 84, 85, 87, 88, 93, 94, 95, 96, 97, 98, 99, 100, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127]:
+ self.enterOuterAlt(localctx, 2)
+ self.state = 1580
+ self.oC_ReservedWord()
+ pass
+ else:
+ raise NoViableAltException(self)
+
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_SymbolicNameContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def UnescapedSymbolicName(self):
+ return self.getToken(LcypherParser.UnescapedSymbolicName, 0)
+
+ def EscapedSymbolicName(self):
+ return self.getToken(LcypherParser.EscapedSymbolicName, 0)
+
+ def HexLetter(self):
+ return self.getToken(LcypherParser.HexLetter, 0)
+
+ def COUNT(self):
+ return self.getToken(LcypherParser.COUNT, 0)
+
+ def FILTER(self):
+ return self.getToken(LcypherParser.FILTER, 0)
+
+ def EXTRACT(self):
+ return self.getToken(LcypherParser.EXTRACT, 0)
+
+ def ANY(self):
+ return self.getToken(LcypherParser.ANY, 0)
+
+ def NONE(self):
+ return self.getToken(LcypherParser.NONE, 0)
+
+ def SINGLE(self):
+ return self.getToken(LcypherParser.SINGLE, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_SymbolicName
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_SymbolicName" ):
+ listener.enterOC_SymbolicName(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_SymbolicName" ):
+ listener.exitOC_SymbolicName(self)
+
+
+
+
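+    # Matches UnescapedSymbolicName, EscapedSymbolicName, HexLetter, or one of
+    # the soft keywords COUNT, FILTER, EXTRACT, ANY, NONE, SINGLE.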
+ def oC_SymbolicName(self):
+
+ localctx = LcypherParser.OC_SymbolicNameContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 190, self.RULE_oC_SymbolicName)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1583
+ _la = self._input.LA(1)
+ if not(((((_la - 89)) & ~0x3f) == 0 and ((1 << (_la - 89)) & 2199493148687) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_ReservedWordContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+ def ALL(self):
+ return self.getToken(LcypherParser.ALL, 0)
+
+ def ASC(self):
+ return self.getToken(LcypherParser.ASC, 0)
+
+ def ASCENDING(self):
+ return self.getToken(LcypherParser.ASCENDING, 0)
+
+ def BY(self):
+ return self.getToken(LcypherParser.BY, 0)
+
+ def CREATE(self):
+ return self.getToken(LcypherParser.CREATE, 0)
+
+ def DELETE_(self):
+ return self.getToken(LcypherParser.DELETE_, 0)
+
+ def DESC(self):
+ return self.getToken(LcypherParser.DESC, 0)
+
+ def DESCENDING(self):
+ return self.getToken(LcypherParser.DESCENDING, 0)
+
+ def DETACH(self):
+ return self.getToken(LcypherParser.DETACH, 0)
+
+ def EXISTS(self):
+ return self.getToken(LcypherParser.EXISTS, 0)
+
+ def LIMIT(self):
+ return self.getToken(LcypherParser.LIMIT, 0)
+
+ def MATCH(self):
+ return self.getToken(LcypherParser.MATCH, 0)
+
+ def MERGE(self):
+ return self.getToken(LcypherParser.MERGE, 0)
+
+ def ON(self):
+ return self.getToken(LcypherParser.ON, 0)
+
+ def OPTIONAL_(self):
+ return self.getToken(LcypherParser.OPTIONAL_, 0)
+
+ def ORDER(self):
+ return self.getToken(LcypherParser.ORDER, 0)
+
+ def REMOVE(self):
+ return self.getToken(LcypherParser.REMOVE, 0)
+
+ def RETURN(self):
+ return self.getToken(LcypherParser.RETURN, 0)
+
+ def SET(self):
+ return self.getToken(LcypherParser.SET, 0)
+
+ def L_SKIP(self):
+ return self.getToken(LcypherParser.L_SKIP, 0)
+
+ def WHERE(self):
+ return self.getToken(LcypherParser.WHERE, 0)
+
+ def WITH(self):
+ return self.getToken(LcypherParser.WITH, 0)
+
+ def UNION(self):
+ return self.getToken(LcypherParser.UNION, 0)
+
+ def UNWIND(self):
+ return self.getToken(LcypherParser.UNWIND, 0)
+
+ def AND(self):
+ return self.getToken(LcypherParser.AND, 0)
+
+ def AS(self):
+ return self.getToken(LcypherParser.AS, 0)
+
+ def CONTAINS(self):
+ return self.getToken(LcypherParser.CONTAINS, 0)
+
+ def DISTINCT(self):
+ return self.getToken(LcypherParser.DISTINCT, 0)
+
+ def ENDS(self):
+ return self.getToken(LcypherParser.ENDS, 0)
+
+ def IN(self):
+ return self.getToken(LcypherParser.IN, 0)
+
+ def IS(self):
+ return self.getToken(LcypherParser.IS, 0)
+
+ def NOT(self):
+ return self.getToken(LcypherParser.NOT, 0)
+
+ def OR(self):
+ return self.getToken(LcypherParser.OR, 0)
+
+ def STARTS(self):
+ return self.getToken(LcypherParser.STARTS, 0)
+
+ def XOR(self):
+ return self.getToken(LcypherParser.XOR, 0)
+
+ def FALSE_(self):
+ return self.getToken(LcypherParser.FALSE_, 0)
+
+ def TRUE_(self):
+ return self.getToken(LcypherParser.TRUE_, 0)
+
+ def NULL_(self):
+ return self.getToken(LcypherParser.NULL_, 0)
+
+ def CONSTRAINT(self):
+ return self.getToken(LcypherParser.CONSTRAINT, 0)
+
+ def DO(self):
+ return self.getToken(LcypherParser.DO, 0)
+
+ def FOR(self):
+ return self.getToken(LcypherParser.FOR, 0)
+
+ def REQUIRE(self):
+ return self.getToken(LcypherParser.REQUIRE, 0)
+
+ def UNIQUE(self):
+ return self.getToken(LcypherParser.UNIQUE, 0)
+
+ def CASE(self):
+ return self.getToken(LcypherParser.CASE, 0)
+
+ def WHEN(self):
+ return self.getToken(LcypherParser.WHEN, 0)
+
+ def THEN(self):
+ return self.getToken(LcypherParser.THEN, 0)
+
+ def ELSE(self):
+ return self.getToken(LcypherParser.ELSE, 0)
+
+ def END(self):
+ return self.getToken(LcypherParser.END, 0)
+
+ def MANDATORY(self):
+ return self.getToken(LcypherParser.MANDATORY, 0)
+
+ def SCALAR(self):
+ return self.getToken(LcypherParser.SCALAR, 0)
+
+ def OF(self):
+ return self.getToken(LcypherParser.OF, 0)
+
+ def ADD(self):
+ return self.getToken(LcypherParser.ADD, 0)
+
+ def DROP(self):
+ return self.getToken(LcypherParser.DROP, 0)
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_ReservedWord
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_ReservedWord" ):
+ listener.enterOC_ReservedWord(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_ReservedWord" ):
+ listener.exitOC_ReservedWord(self)
+
+
+
+
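+    # Matches any single reserved keyword token (ALL, ASC, ... DROP); the two
+    # bitmask tests cover the keyword token set in one set-membership check.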
+ def oC_ReservedWord(self):
+
+ localctx = LcypherParser.OC_ReservedWordContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 192, self.RULE_oC_ReservedWord)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1585
+ _la = self._input.LA(1)
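+            # Generated bitset membership test: the two 64-bit masks cover
+            # token types 0-63 and 64-127, so the check passes only when the
+            # lookahead token is one of the reserved-word tokens.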
+ if not((((_la) & ~0x3f) == 0 and ((1 << _la) & -6917810502617792512) != 0) or ((((_la - 64)) & ~0x3f) == 0 and ((1 << (_la - 64)) & -18014261578046465) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_LeftArrowHeadContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_LeftArrowHead
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_LeftArrowHead" ):
+ listener.enterOC_LeftArrowHead(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_LeftArrowHead" ):
+ listener.exitOC_LeftArrowHead(self)
+
+
+
+
+ def oC_LeftArrowHead(self):
+
+ localctx = LcypherParser.OC_LeftArrowHeadContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 194, self.RULE_oC_LeftArrowHead)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1587
+ _la = self._input.LA(1)
+ if not((((_la) & ~0x3f) == 0 and ((1 << _la) & 2013790208) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_RightArrowHeadContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_RightArrowHead
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_RightArrowHead" ):
+ listener.enterOC_RightArrowHead(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_RightArrowHead" ):
+ listener.exitOC_RightArrowHead(self)
+
+
+
+
+ def oC_RightArrowHead(self):
+
+ localctx = LcypherParser.OC_RightArrowHeadContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 196, self.RULE_oC_RightArrowHead)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1589
+ _la = self._input.LA(1)
+ if not((((_la) & ~0x3f) == 0 and ((1 << _la) & 32213303296) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+ class OC_DashContext(ParserRuleContext):
+ __slots__ = 'parser'
+
+ def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
+ super().__init__(parent, invokingState)
+ self.parser = parser
+
+
+ def getRuleIndex(self):
+ return LcypherParser.RULE_oC_Dash
+
+ def enterRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "enterOC_Dash" ):
+ listener.enterOC_Dash(self)
+
+ def exitRule(self, listener:ParseTreeListener):
+ if hasattr( listener, "exitOC_Dash" ):
+ listener.exitOC_Dash(self)
+
+
+
+
+ def oC_Dash(self):
+
+ localctx = LcypherParser.OC_DashContext(self, self._ctx, self.state)
+ self.enterRule(localctx, 198, self.RULE_oC_Dash)
+ self._la = 0 # Token type
+ try:
+ self.enterOuterAlt(localctx, 1)
+ self.state = 1591
+ _la = self._input.LA(1)
+ if not((((_la) & ~0x3f) == 0 and ((1 << _la) & 70334384455680) != 0)):
+ self._errHandler.recoverInline(self)
+ else:
+ self._errHandler.reportMatch(self)
+ self.consume()
+ except RecognitionException as re:
+ localctx.exception = re
+ self._errHandler.reportError(self, re)
+ self._errHandler.recover(self, re)
+ finally:
+ self.exitRule()
+ return localctx
+
+
+
+
+
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/__init__.py
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/__init__.py
@@ -0,0 +1 @@
+
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/grammar_evaluator.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/grammar_evaluator.py
new file mode 100644
index 00000000..7c6331da
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/impl/tugraph-db/grammar_evaluator.py
@@ -0,0 +1,47 @@
+import os.path
+import sys
+
+from antlr4 import CommonTokenStream, InputStream
+from antlr4.error.ErrorListener import ErrorListener
+
+sys.path.append(os.path.dirname(__file__))
+from LcypherLexer import LcypherLexer
+from LcypherParser import LcypherParser
+
+
+class MyErrorListener(ErrorListener):
+ def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
+ raise Exception(
+ "ERROR: when parsing line %d column %d: %s\n" % (line, column, msg)
+ )
+
+
+class GrammarEvaluator:
+    def evaluate(self, query_predict, query_gold):
+        """Check the prediction against the generated Lcypher grammar.
+
+        Returns 1 if the predicted query parses, 0 if it does not, and -1
+        if the gold query itself fails to parse.
+        """
+        error_listener = MyErrorListener()
+        try:
+            # Parse the gold query first; a malformed gold query means the
+            # sample cannot be scored, which is flagged with -1.
+            input_stream = InputStream(query_gold)
+            lexer = LcypherLexer(input_stream)
+            lexer.removeErrorListeners()
+            lexer.addErrorListener(error_listener)
+            stream = CommonTokenStream(lexer)
+            parser = LcypherParser(stream)
+            parser.removeErrorListeners()
+            parser.addErrorListener(error_listener)
+            parser.oC_Cypher()
+            try:
+                # The gold query is valid; now check the prediction.
+                input_stream = InputStream(query_predict)
+                lexer = LcypherLexer(input_stream)
+                lexer.removeErrorListeners()
+                lexer.addErrorListener(error_listener)
+                stream = CommonTokenStream(lexer)
+                parser = LcypherParser(stream)
+                parser.removeErrorListeners()
+                parser.addErrorListener(error_listener)
+                parser.oC_Cypher()
+                return 1
+            except Exception:
+                return 0
+        except Exception:
+            return -1
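+
+
+# A minimal usage sketch with hypothetical queries: 1 means the prediction
+# parses under the Lcypher grammar, 0 means it does not, and -1 means the
+# gold query itself is invalid.
+#
+#     evaluator = GrammarEvaluator()
+#     evaluator.evaluate("MATCH (n) RETURN n", "MATCH (n) RETURN n")  # -> 1
+#     evaluator.evaluate("MATCH RETURN", "MATCH (n) RETURN n")        # -> 0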
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/similarity_evaluator.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/similarity_evaluator.py
new file mode 100644
index 00000000..60f1f64d
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/eval/evaluator/similarity_evaluator.py
@@ -0,0 +1,6 @@
+import jaro
+
+
+class SimilarityEvaluator:
+ def evaluate(self, query_predict, query_gold):
+ return jaro.jaro_winkler_metric(query_predict, query_gold)
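+
+
+# A minimal usage sketch: the Jaro-Winkler metric returns 1.0 for identical
+# strings and drifts toward 0.0 as they diverge, e.g. (hypothetical queries)
+#
+#     SimilarityEvaluator().evaluate("MATCH (n) RETURN n", "MATCH (m) RETURN m")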
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/__init__.py
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/__init__.py
@@ -0,0 +1 @@
+
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/adapter.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/adapter.py
new file mode 100644
index 00000000..322ad44c
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/adapter.py
@@ -0,0 +1,116 @@
+import os
+from typing import TYPE_CHECKING
+
+import torch
+from peft import LoraConfig, PeftModel, TaskType, get_peft_model
+from peft.utils import CONFIG_NAME, WEIGHTS_NAME
+
+from .config_parser import load_trainable_params
+from .loggings import get_logger
+
+if TYPE_CHECKING:
+    from transformers.modeling_utils import PreTrainedModel
+
+    from ..configs.model_args import FinetuningArguments, ModelArguments
+
+logger = get_logger(__name__)
+
+
+def init_adapter(
+ model: "PreTrainedModel",
+ model_args: "ModelArguments",
+ finetuning_args: "FinetuningArguments",
+ is_trainable: bool,
+ is_mergeable: bool,
+) -> "PreTrainedModel":
+ r"""
+ Initializes the adapters.
+
+    Supports full-parameter, freeze, LoRA, and QLoRA training.
+
+ Note that the trainable parameters must be cast to float32.
+ """
+
+ if finetuning_args.finetuning_type == "none" and is_trainable:
+ raise ValueError("You cannot use finetuning_type=none while training.")
+
+ if finetuning_args.finetuning_type == "full" and is_trainable:
+ logger.info("Fine-tuning method: Full")
+ model = model.float()
+
+ if finetuning_args.finetuning_type == "freeze":
+ logger.info("Fine-tuning method: Freeze")
+
+ for name, param in model.named_parameters():
+ if not any(
+ trainable_layer in name
+ for trainable_layer in finetuning_args.trainable_layers
+ ):
+ param.requires_grad_(False)
+ else:
+ param.data = param.data.to(torch.float32)
+
+ if model_args.checkpoint_dir is not None:
+ assert load_trainable_params(
+ model, model_args.checkpoint_dir[0]
+ ), "Model checkpoint is not correctly loaded."
+
+ if finetuning_args.finetuning_type == "lora":
+ logger.info("Fine-tuning method: LoRA")
+ latest_checkpoint = None
+
+ if model_args.checkpoint_dir is not None:
+ assert os.path.exists(
+ os.path.join(model_args.checkpoint_dir[0], WEIGHTS_NAME)
+ ), "Provided path ({}) does not contain a LoRA weight.".format(
+ model_args.checkpoint_dir[0]
+ )
+ assert os.path.exists(
+ os.path.join(model_args.checkpoint_dir[0], CONFIG_NAME)
+ ), "The given checkpoint may be not a LoRA checkpoint, please specify `--finetuning_type full/freeze` instead."
+
+ if (is_trainable and finetuning_args.resume_lora_training) or (
+ not is_mergeable
+ ): # continually fine-tuning
+ checkpoints_to_merge, latest_checkpoint = (
+ model_args.checkpoint_dir[:-1],
+ model_args.checkpoint_dir[-1],
+ )
+ else:
+ checkpoints_to_merge = model_args.checkpoint_dir
+
+ for checkpoint in checkpoints_to_merge:
+ model = PeftModel.from_pretrained(model, checkpoint)
+ model = model.merge_and_unload()
+
+ if len(checkpoints_to_merge) > 0:
+ logger.info(
+ "Merged {} model checkpoint(s).".format(len(checkpoints_to_merge))
+ )
+
+ if (
+ latest_checkpoint is not None
+ ): # resume lora training or quantized inference
+ model = PeftModel.from_pretrained(
+ model, latest_checkpoint, is_trainable=is_trainable
+ )
+
+ if (
+ is_trainable and latest_checkpoint is None
+ ): # create new lora weights while training
+ lora_config = LoraConfig(
+ task_type=TaskType.CAUSAL_LM,
+ inference_mode=False,
+ r=finetuning_args.lora_rank,
+ lora_alpha=finetuning_args.lora_alpha,
+ lora_dropout=finetuning_args.lora_dropout,
+ target_modules=finetuning_args.lora_target,
+ )
+ model = get_peft_model(model, lora_config)
+
+ if model_args.checkpoint_dir is not None:
+ logger.info(
+ "Loaded fine-tuned model from checkpoint(s): {}".format(
+ ",".join(model_args.checkpoint_dir)
+ )
+ )
+
+ return model
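+
+
+# A minimal sketch of the expected call site, assuming the argument objects
+# come from config_parser.get_train_args / get_infer_args:
+#
+#     model = init_adapter(
+#         model, model_args, finetuning_args, is_trainable=True, is_mergeable=False
+#     )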
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/chat_model.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/chat_model.py
new file mode 100644
index 00000000..82b0e0ba
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/chat_model.py
@@ -0,0 +1,125 @@
+from threading import Thread
+from typing import Any, Dict, Generator, List, Optional, Tuple
+
+import torch
+from transformers import GenerationConfig, TextIteratorStreamer
+
+from ..data_process.data_utils import get_template_and_fix_tokenizer
+from .config_parser import get_infer_args
+from .load_tokenizer import dispatch_model, load_model_and_tokenizer
+from .model_trainer import get_logits_processor
+
+
+class ChatModel:
+ def __init__(self, args: Optional[Dict[str, Any]] = None) -> None:
+ (
+ model_args,
+ self.data_args,
+ finetuning_args,
+ self.generating_args,
+ ) = get_infer_args(args)
+ self.model, self.tokenizer = load_model_and_tokenizer(
+ model_args, finetuning_args
+ )
+ self.tokenizer.padding_side = "left"
+ self.model = dispatch_model(self.model)
+ self.template = get_template_and_fix_tokenizer(
+ self.data_args.template, self.tokenizer
+ )
+ self.system_prompt = self.data_args.system_prompt
+
+ def process_args(
+ self,
+ query: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ **input_kwargs
+ ) -> Tuple[Dict[str, Any], int]:
+ system = system or self.system_prompt
+
+ prompt, _ = self.template.encode_oneturn(
+ tokenizer=self.tokenizer,
+ query=query,
+ resp="",
+ history=history,
+ system=system,
+ )
+ input_ids = torch.tensor([prompt], device=self.model.device)
+ prompt_length = len(input_ids[0])
+
+ do_sample = input_kwargs.pop("do_sample", None)
+ temperature = input_kwargs.pop("temperature", None)
+ top_p = input_kwargs.pop("top_p", None)
+ top_k = input_kwargs.pop("top_k", None)
+ repetition_penalty = input_kwargs.pop("repetition_penalty", None)
+ max_length = input_kwargs.pop("max_length", None)
+ max_new_tokens = input_kwargs.pop("max_new_tokens", None)
+
+ generating_args = self.generating_args.to_dict()
+ generating_args.update(
+ dict(
+ do_sample=do_sample
+ if do_sample is not None
+ else generating_args["do_sample"],
+ temperature=temperature or generating_args["temperature"],
+ top_p=top_p or generating_args["top_p"],
+ top_k=top_k or generating_args["top_k"],
+ repetition_penalty=repetition_penalty
+ or generating_args["repetition_penalty"],
+ eos_token_id=[self.tokenizer.eos_token_id]
+ + self.tokenizer.additional_special_tokens_ids,
+ pad_token_id=self.tokenizer.pad_token_id,
+ )
+ )
+
+ if max_length:
+ generating_args.pop("max_new_tokens", None)
+ generating_args["max_length"] = max_length
+
+ if max_new_tokens:
+ generating_args.pop("max_length", None)
+ generating_args["max_new_tokens"] = max_new_tokens
+
+ gen_kwargs = dict(
+ inputs=input_ids,
+ generation_config=GenerationConfig(**generating_args),
+ logits_processor=get_logits_processor(),
+ )
+
+ return gen_kwargs, prompt_length
+
+ @torch.inference_mode()
+ def chat(
+ self,
+ query: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ **input_kwargs
+ ) -> Tuple[str, Tuple[int, int]]:
+ gen_kwargs, prompt_length = self.process_args(
+ query, history, system, **input_kwargs
+ )
+ generation_output = self.model.generate(**gen_kwargs)
+ outputs = generation_output.tolist()[0][prompt_length:]
+ response = self.tokenizer.decode(outputs, skip_special_tokens=True)
+ response_length = len(outputs)
+ return response, (prompt_length, response_length)
+
+ @torch.inference_mode()
+ def stream_chat(
+ self,
+ query: str,
+ history: Optional[List[Tuple[str, str]]] = None,
+ system: Optional[str] = None,
+ **input_kwargs
+ ) -> Generator[str, None, None]:
+ gen_kwargs, _ = self.process_args(query, history, system, **input_kwargs)
+ streamer = TextIteratorStreamer(
+ self.tokenizer, timeout=60.0, skip_prompt=True, skip_special_tokens=True
+ )
+ gen_kwargs["streamer"] = streamer
+
+ thread = Thread(target=self.model.generate, kwargs=gen_kwargs)
+ thread.start()
+
+ yield from streamer
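+
+
+# A minimal usage sketch (model path and template values are assumptions):
+#
+#     model = ChatModel({
+#         "model_name_or_path": "codellama/CodeLlama-7b-Instruct-hf",
+#         "template": "llama2",
+#     })
+#     response, (prompt_len, response_len) = model.chat("MATCH all persons")
+#     for new_text in model.stream_chat("MATCH all persons"):
+#         print(new_text, end="")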
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/config_parser.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/config_parser.py
new file mode 100644
index 00000000..34ead7ad
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/config_parser.py
@@ -0,0 +1,258 @@
+import os
+import sys
+from typing import Any, Dict, Optional, Tuple
+
+import datasets
+import torch
+import transformers
+from transformers import HfArgumentParser, Seq2SeqTrainingArguments
+from transformers.modeling_utils import load_sharded_checkpoint
+from transformers.trainer import WEIGHTS_INDEX_NAME, WEIGHTS_NAME
+from transformers.trainer_utils import get_last_checkpoint
+
+from ..configs.data_args import DataArguments
+from ..configs.model_args import (
+ FinetuningArguments,
+ GeneratingArguments,
+ ModelArguments,
+)
+from .loggings import get_logger
+
+logger = get_logger(__name__)
+
+
+def get_state_dict(model: torch.nn.Module) -> Dict[str, torch.Tensor]:
+ state_dict: Dict[str, torch.Tensor] = model.state_dict()
+ filtered_state_dict = {}
+
+ for k, v in model.named_parameters():
+ if v.requires_grad:
+ filtered_state_dict[k] = state_dict[k].cpu().clone().detach()
+
+ return filtered_state_dict
+
+
+def load_trainable_params(model: torch.nn.Module, checkpoint_dir: os.PathLike) -> bool:
+ weights_file = os.path.join(checkpoint_dir, WEIGHTS_NAME)
+ if os.path.exists(weights_file):
+ model_state_dict = torch.load(weights_file, map_location="cpu")
+ model.load_state_dict(model_state_dict, strict=False) # skip missing keys
+ elif os.path.exists(os.path.join(checkpoint_dir, WEIGHTS_INDEX_NAME)):
+ load_sharded_checkpoint(model, checkpoint_dir, strict=False)
+ else:
+ logger.warning(
+ "Provided path ({}) does not contain pre-trained weights.".format(
+ checkpoint_dir
+ )
+ )
+ return False
+ return True
+
+
+def _parse_args(
+ parser: HfArgumentParser, args: Optional[Dict[str, Any]] = None
+) -> Tuple[Any]:
+ if args is not None:
+ return parser.parse_dict(args)
+ elif len(sys.argv) == 2 and sys.argv[1].endswith(".yaml"):
+ return parser.parse_yaml_file(os.path.abspath(sys.argv[1]))
+ elif len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
+ return parser.parse_json_file(os.path.abspath(sys.argv[1]))
+ else:
+ return parser.parse_args_into_dataclasses()
+
+
+def parse_train_args(
+ args: Optional[Dict[str, Any]] = None
+) -> Tuple[
+ ModelArguments,
+ DataArguments,
+ Seq2SeqTrainingArguments,
+ FinetuningArguments,
+ GeneratingArguments,
+]:
+ parser = HfArgumentParser(
+ (
+ ModelArguments,
+ DataArguments,
+ Seq2SeqTrainingArguments,
+ FinetuningArguments,
+ GeneratingArguments,
+ )
+ )
+ return _parse_args(parser, args)
+
+
+def parse_infer_args(
+ args: Optional[Dict[str, Any]] = None
+) -> Tuple[ModelArguments, DataArguments, FinetuningArguments, GeneratingArguments]:
+ parser = HfArgumentParser(
+ (ModelArguments, DataArguments, FinetuningArguments, GeneratingArguments)
+ )
+ return _parse_args(parser, args)
+
+
+def get_train_args(
+ args: Optional[Dict[str, Any]] = None, data_args_init: bool = True
+) -> Tuple[
+ ModelArguments,
+ DataArguments,
+ Seq2SeqTrainingArguments,
+ FinetuningArguments,
+ GeneratingArguments,
+]:
+ (
+ model_args,
+ data_args,
+ training_args,
+ finetuning_args,
+ generating_args,
+ ) = parse_train_args(args)
+
+ # Setup logging
+ if training_args.should_log:
+ # The default of training_args.log_level is passive, so we set log level at info here to have that default.
+ transformers.utils.logging.set_verbosity_info()
+
+ log_level = training_args.get_process_log_level()
+ datasets.utils.logging.set_verbosity(log_level)
+ transformers.utils.logging.set_verbosity(log_level)
+ transformers.utils.logging.enable_default_handler()
+ transformers.utils.logging.enable_explicit_format()
+
+ # Check arguments (do not check finetuning_args since it may be loaded from checkpoints)
+ if data_args_init:
+ data_args.init_for_training()
+
+ if training_args.max_steps == -1 and data_args.streaming:
+ raise ValueError("Please specify `max_steps` in streaming mode.")
+
+ if data_args.val_size > 1e-6 and data_args.val_size < 1 and data_args.streaming:
+ raise ValueError("Streaming mode should have an integer val size.")
+
+ if training_args.do_train and training_args.predict_with_generate:
+ raise ValueError(
+ "`predict_with_generate` cannot be set as True while training."
+ )
+
+ if (
+ training_args.do_train
+ and finetuning_args.finetuning_type == "lora"
+ and finetuning_args.lora_target is None
+ ):
+ raise ValueError("Please specify `lora_target` in LoRA training.")
+
+ if (
+ model_args.quantization_bit is not None
+ and finetuning_args.finetuning_type != "lora"
+ ):
+ raise ValueError("Quantization is only compatible with the LoRA method.")
+
+ if model_args.checkpoint_dir is not None:
+ if finetuning_args.finetuning_type != "lora":
+ if len(model_args.checkpoint_dir) != 1:
+ raise ValueError("Only LoRA tuning accepts multiple checkpoints.")
+ elif (
+ model_args.quantization_bit is not None
+ and len(model_args.checkpoint_dir) != 1
+ ):
+ raise ValueError("Quantized model only accepts a single checkpoint.")
+
+ if model_args.quantization_bit is not None and (not training_args.do_train):
+ logger.warning("Evaluating model in 4/8-bit mode may cause lower scores.")
+
+ if training_args.do_train and (not training_args.fp16) and (not training_args.bf16):
+ logger.warning("We recommend enable mixed precision training.")
+
+ # postprocess data_args
+ if data_args.max_samples is not None and data_args.streaming:
+ logger.warning(
+ "`max_samples` is incompatible with `streaming`. Disabling max_samples."
+ )
+ data_args.max_samples = None
+
+ # postprocess training_args
+ if (
+ training_args.local_rank != -1
+ and training_args.ddp_find_unused_parameters is None
+ and finetuning_args.finetuning_type == "lora"
+ ):
+ logger.warning(
+ "`ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training."
+ )
+ training_args_dict = training_args.to_dict()
+ training_args_dict.update(dict(ddp_find_unused_parameters=False))
+ training_args = Seq2SeqTrainingArguments(**training_args_dict)
+
+ if (
+ training_args.resume_from_checkpoint is None
+ and training_args.do_train
+ and os.path.isdir(training_args.output_dir)
+ and not training_args.overwrite_output_dir
+ ):
+ last_checkpoint = get_last_checkpoint(training_args.output_dir)
+ if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
+ raise ValueError(
+ "Output directory already exists and is not empty. Use `overwrite_output_dir`."
+ )
+
+ if last_checkpoint is not None:
+ training_args_dict = training_args.to_dict()
+ training_args_dict.update(dict(resume_from_checkpoint=last_checkpoint))
+ training_args = Seq2SeqTrainingArguments(**training_args_dict)
+ logger.info(
+ "Resuming from checkpoint. Change `output_dir` or use `overwrite_output_dir` to avoid."
+ )
+
+ # postprocess model_args
+ if training_args.bf16:
+ if not torch.cuda.is_bf16_supported():
+ raise ValueError("Current device does not support bf16 training.")
+ model_args.compute_dtype = torch.bfloat16
+ else:
+ model_args.compute_dtype = torch.float16
+
+ model_args.model_max_length = (
+ data_args.max_source_length + data_args.max_target_length
+ )
+
+ # Log on each process the small summary:
+ logger.info(
+ "Process rank: {}, device: {}, n_gpu: {}\n distributed training: {}, compute dtype: {}".format(
+ training_args.local_rank,
+ training_args.device,
+ training_args.n_gpu,
+ bool(training_args.local_rank != -1),
+ str(model_args.compute_dtype),
+ )
+ )
+ logger.info(f"Training/evaluation parameters {training_args}")
+
+ # Set seed before initializing model.
+ transformers.set_seed(training_args.seed)
+
+ return model_args, data_args, training_args, finetuning_args, generating_args
+
+
+def get_infer_args(
+ args: Optional[Dict[str, Any]] = None
+) -> Tuple[ModelArguments, DataArguments, FinetuningArguments, GeneratingArguments]:
+ model_args, data_args, finetuning_args, generating_args = parse_infer_args(args)
+
+ if (
+ model_args.quantization_bit is not None
+ and finetuning_args.finetuning_type != "lora"
+ ):
+ raise ValueError("Quantization is only compatible with the LoRA method.")
+
+ if model_args.checkpoint_dir is not None:
+ if finetuning_args.finetuning_type != "lora":
+ if len(model_args.checkpoint_dir) != 1:
+ raise ValueError("Only LoRA tuning accepts multiple checkpoints.")
+ elif (
+ model_args.quantization_bit is not None
+ and len(model_args.checkpoint_dir) != 1
+ ):
+ raise ValueError("Quantized model only accepts a single checkpoint.")
+
+ return model_args, data_args, finetuning_args, generating_args
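+
+
+# A minimal sketch: both entry points accept a plain dict, a path to a
+# .yaml/.json file passed in sys.argv, or regular command-line flags, e.g.
+# (argument values below are assumptions)
+#
+#     model_args, data_args, training_args, finetuning_args, generating_args = (
+#         get_train_args({"model_name_or_path": "codellama/CodeLlama-7b-Instruct-hf",
+#                         "output_dir": "dbgpt_hub_gql/output/adapter"})
+#     )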
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/load_tokenizer.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/load_tokenizer.py
new file mode 100644
index 00000000..5bd16fc3
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/load_tokenizer.py
@@ -0,0 +1,401 @@
+import inspect
+import math
+import os
+from types import MethodType
+from typing import TYPE_CHECKING, Dict, List, Literal, Optional, Tuple
+
+import torch
+from transformers import (
+ AutoConfig,
+ AutoModelForCausalLM,
+ AutoTokenizer,
+ BitsAndBytesConfig,
+ PretrainedConfig,
+ PreTrainedModel,
+ PreTrainedTokenizer,
+ PreTrainedTokenizerBase,
+)
+from transformers.deepspeed import is_deepspeed_zero3_enabled
+from transformers.trainer import SAFE_WEIGHTS_NAME, WEIGHTS_NAME
+from transformers.utils import cached_file, check_min_version
+from transformers.utils.versions import require_version
+from trl import AutoModelForCausalLMWithValueHead
+
+from ..configs.config import LAYERNORM_NAMES, VALUE_HEAD_FILE_NAME
+from ..configs.model_args import FinetuningArguments
+from .adapter import init_adapter
+from .loggings import get_logger, reset_logging
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedTokenizer
+
+ from ..configs.model_args import ModelArguments
+
+
+logger = get_logger(__name__)
+
+
+check_min_version("4.29.1")
+require_version("datasets>=2.12.0", "To fix: pip install datasets>=2.12.0")
+require_version("accelerate>=0.21.0", "To fix: pip install accelerate>=0.21.0")
+require_version("peft>=0.4.0", "To fix: pip install peft>=0.4.0")
+require_version("trl>=0.5.0", "To fix: pip install trl>=0.5.0")
+
+
+def count_parameters(model: torch.nn.Module) -> Tuple[int, int]:
+ r"""
+ Returns the number of trainable parameters and number of all parameters in the model.
+ """
+ trainable_params, all_param = 0, 0
+ for param in model.parameters():
+ num_params = param.numel()
+ # if using DS Zero 3 and the weights are initialized empty
+ if num_params == 0 and hasattr(param, "ds_numel"):
+ num_params = param.ds_numel
+
+ # Due to the design of 4bit linear layers from bitsandbytes, multiply the number of parameters by 2
+ if param.__class__.__name__ == "Params4bit":
+ num_params = num_params * 2
+
+ all_param += num_params
+ if param.requires_grad:
+ trainable_params += num_params
+
+ return trainable_params, all_param
+
+
+# Includes: (1) cast the layernorm in fp32 (2) make output embedding layer require grads (3) upcast the lm_head to fp32
+# Inspired by: https://github.com/huggingface/peft/blob/c0209c35abbf88c63aa267800d98a8e212ed0a42/src/peft/utils/other.py#L35
+def prepare_model_for_training(
+ model: "PreTrainedModel",
+ finetuning_type: str,
+ output_layer_name: Optional[str] = "lm_head",
+ use_gradient_checkpointing: Optional[bool] = True,
+ layer_norm_names: Optional[List[str]] = LAYERNORM_NAMES,
+) -> "PreTrainedModel":
+ for name, param in model.named_parameters():
+ if param.ndim == 1 and any(
+ layer_norm_name in name for layer_norm_name in layer_norm_names
+ ):
+ param.data = param.data.to(torch.float32)
+
+ if use_gradient_checkpointing:
+ if hasattr(model, "enable_input_require_grads"):
+ model.enable_input_require_grads()
+ else:
+
+ def make_inputs_require_grad(module, input, output):
+ output.requires_grad_(True)
+
+ model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
+
+ model.gradient_checkpointing_enable()
+ model.config.use_cache = (
+ False # turn off when gradient checkpointing is enabled
+ )
+
+ if finetuning_type != "full" and hasattr(model, output_layer_name):
+ output_layer: torch.nn.Linear = getattr(model, output_layer_name)
+ input_dtype = output_layer.weight.dtype
+
+ class CastOutputToFloat(torch.nn.Sequential):
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
+ return super().forward(x.to(input_dtype)).to(torch.float32)
+
+ setattr(model, output_layer_name, CastOutputToFloat(output_layer))
+
+ return model
+
+
+def load_valuehead_params(
+ path_or_repo_id: str, model_args: "ModelArguments"
+) -> Dict[str, torch.Tensor]:
+ r"""
+ Loads value head parameters from Hugging Face Hub or local disk.
+
+ Returns: dict with keys `v_head.summary.weight` and `v_head.summary.bias`.
+ """
+ kwargs = {"path_or_repo_id": path_or_repo_id, "cache_dir": model_args.cache_dir}
+
+ if "token" in inspect.signature(cached_file).parameters:
+ kwargs["token"] = model_args.hf_hub_token
+ elif (
+ "use_auth_token" in inspect.signature(cached_file).parameters
+ ): # for transformers==4.31.0
+ kwargs["use_auth_token"] = model_args.hf_hub_token
+ else:
+ logger.warning("Ignore `hf_hub_token` since matched parameter is not found.")
+
+ try:
+ vhead_file = cached_file(filename=WEIGHTS_NAME, **kwargs)
+ return torch.load(vhead_file, map_location="cpu")
+ except Exception as err:
+ logger.info("Failed to load {}: {}".format(WEIGHTS_NAME, str(err)))
+
+ try:
+ from safetensors import safe_open
+
+ vhead_file = cached_file(filename=SAFE_WEIGHTS_NAME, **kwargs)
+ with safe_open(vhead_file, framework="pt", device="cpu") as f:
+ return {
+ "v_head.summary.weight": f.get_tensor("v_head.summary.weight"),
+ "v_head.summary.bias": f.get_tensor("v_head.summary.bias"),
+ }
+ except Exception as err:
+ logger.info("Failed to load {}: {}".format(SAFE_WEIGHTS_NAME, str(err)))
+
+ logger.warning(
+ "Provided path ({}) does not contain valuehead weights.".format(path_or_repo_id)
+ )
+ return None
+
+
+def load_model_and_tokenizer(
+ model_args: "ModelArguments",
+ finetuning_args: "FinetuningArguments",
+ is_trainable: Optional[bool] = False,
+ add_valuehead: Optional[bool] = False,
+) -> Tuple[PreTrainedModel, "PreTrainedTokenizer"]:
+ r"""
+ Loads pretrained model and tokenizer.
+
+ Support both training and inference.
+ """
+ if (not is_trainable) and model_args.checkpoint_dir is None:
+ logger.warning(
+ "Checkpoint is not found at evaluation, load the original model."
+ )
+ finetuning_args = FinetuningArguments(finetuning_type="none")
+
+ config_kwargs = {
+ "trust_remote_code": True,
+ "cache_dir": model_args.cache_dir,
+ "revision": model_args.model_revision,
+ "use_auth_token": True if model_args.use_auth_token else None,
+ }
+
+ tokenizer = AutoTokenizer.from_pretrained(
+ model_args.model_name_or_path,
+ use_fast=model_args.use_fast_tokenizer,
+ split_special_tokens=model_args.split_special_tokens,
+ padding_side="right", # training with left-padded tensors in fp16 precision may cause overflow
+ **config_kwargs
+ )
+
+ if (
+ finetuning_args.finetuning_type == "full"
+ and model_args.checkpoint_dir is not None
+ ):
+ model_to_load = model_args.checkpoint_dir[0]
+ else:
+ model_to_load = model_args.model_name_or_path
+
+ config = AutoConfig.from_pretrained(model_to_load, **config_kwargs)
+
+ if hasattr(config, "fp16") and hasattr(config, "bf16"): # fix Qwen config
+ if model_args.compute_dtype == torch.bfloat16:
+ setattr(config, "bf16", True)
+ else:
+ setattr(config, "fp16", True)
+
+ # Fix config (for Qwen)
+ if getattr(config, "model_type", None) == "qwen":
+ for dtype_name, dtype in [
+ ("fp16", torch.float16),
+ ("bf16", torch.bfloat16),
+ ("fp32", torch.float32),
+ ]:
+ setattr(config, dtype_name, getattr(config, "torch_dtype", None) == dtype)
+
+ # Set RoPE scaling
+ if model_args.rope_scaling is not None:
+ if hasattr(config, "use_dynamic_ntk"): # for Qwen models
+ if is_trainable:
+ logger.warning("Qwen model does not support RoPE scaling in training.")
+ else:
+ setattr(config, "use_dynamic_ntk", True)
+ setattr(config, "use_logn_attn", True)
+ logger.info("Using dynamic NTK scaling.")
+
+ elif hasattr(config, "rope_scaling"): # for LLaMA models
+ require_version(
+ "transformers>=4.31.0", "RoPE scaling requires transformers>=4.31.0"
+ )
+
+ if is_trainable:
+ if model_args.rope_scaling == "dynamic":
+ logger.warning(
+ "Dynamic NTK may not work well with fine-tuning. "
+ "See: https://github.com/huggingface/transformers/pull/24653"
+ )
+
+ current_max_length = getattr(config, "max_position_embeddings", None)
+ if (
+ current_max_length
+ and model_args.model_max_length > current_max_length
+ ):
+ scaling_factor = float(
+ math.ceil(model_args.model_max_length / current_max_length)
+ )
+ else:
+ logger.warning(
+ "Input length is smaller than max length. Consider increase input length."
+ )
+ scaling_factor = 1.0
+ else:
+ scaling_factor = 2.0
+
+ setattr(
+ config,
+ "rope_scaling",
+ {"type": model_args.rope_scaling, "factor": scaling_factor},
+ )
+ logger.info(
+ "Using {} scaling strategy and setting scaling factor to {}".format(
+ model_args.rope_scaling, scaling_factor
+ )
+ )
+
+ else:
+ logger.warning("Current model does not support RoPE scaling.")
+
+ # Quantization configurations (using bitsandbytes library).
+ is_mergeable = True
+ if model_args.quantization_bit is not None:
+ if is_deepspeed_zero3_enabled():
+ raise ValueError("DeepSpeed ZeRO-3 is incompatible with quantization.")
+
+ if model_args.quantization_bit == 8:
+ require_version(
+ "bitsandbytes>=0.37.0", "To fix: pip install bitsandbytes>=0.37.0"
+ )
+ # config_kwargs["load_in_8bit"] = True
+ config_kwargs["quantization_config"] = BitsAndBytesConfig(load_in_8bit=True)
+
+ elif model_args.quantization_bit == 4:
+ require_version(
+ "bitsandbytes>=0.39.0", "To fix: pip install bitsandbytes>=0.39.0"
+ )
+ # config_kwargs["load_in_4bit"] = True
+ config_kwargs["quantization_config"] = BitsAndBytesConfig(
+ load_in_4bit=True,
+ bnb_4bit_compute_dtype=model_args.compute_dtype,
+ bnb_4bit_use_double_quant=model_args.double_quantization,
+ bnb_4bit_quant_type=model_args.quantization_type,
+ )
+
+ is_mergeable = False
+ config_kwargs["device_map"] = (
+ {"": int(os.environ.get("LOCAL_RANK", "0"))} if is_trainable else "auto"
+ )
+ logger.info("Quantizing model to {} bit.".format(model_args.quantization_bit))
+
+ # Load and prepare pre-trained models (without valuehead).
+ model = AutoModelForCausalLM.from_pretrained(
+ model_to_load,
+ config=config,
+ torch_dtype=model_args.compute_dtype,
+ low_cpu_mem_usage=(not is_deepspeed_zero3_enabled()),
+ **config_kwargs
+ )
+
+ # Disable custom generate method (for Qwen)
+ if "GenerationMixin" not in str(model.generate.__func__):
+ model.generate = MethodType(PreTrainedModel.generate, model)
+
+    # Fix the LM head (for ChatGLM2, ChatGLM3)
+ if not hasattr(model, "lm_head") and hasattr(model, "transformer"):
+ setattr(model, "lm_head", model.transformer.output_layer)
+
+ # Register auto class to save the custom code files.
+ if isinstance(config, PretrainedConfig) and "AutoConfig" in getattr(
+ config, "auto_map", {}
+ ):
+ config.__class__.register_for_auto_class()
+ if isinstance(model, PreTrainedModel) and "AutoModelForCausalLM" in getattr(
+ config, "auto_map", {}
+ ):
+ model.__class__.register_for_auto_class()
+ if isinstance(
+ tokenizer, PreTrainedTokenizerBase
+ ) and "AutoTokenizer" in tokenizer.init_kwargs.get("auto_map", {}):
+ tokenizer.__class__.register_for_auto_class()
+
+ # Initialize adapters
+ model = (
+ prepare_model_for_training(model, finetuning_args.finetuning_type)
+ if is_trainable
+ else model
+ )
+ model = init_adapter(model, model_args, finetuning_args, is_trainable, is_mergeable)
+
+ # Prepare model with valuehead for RLHF
+ if add_valuehead:
+ model: "AutoModelForCausalLMWithValueHead" = (
+ AutoModelForCausalLMWithValueHead.from_pretrained(model)
+ )
+ ignore_modules = [
+ name for name, _ in model.named_parameters() if "pretrained_model" in name
+ ]
+ setattr(model, "_keys_to_ignore_on_save", ignore_modules)
+ setattr(
+ model, "tie_weights", MethodType(lambda _: None, model)
+ ) # use empty method
+ vhead_path = (
+ model_args.checkpoint_dir[-1]
+ if model_args.checkpoint_dir is not None
+ else model_args.model_name_or_path
+ )
+ vhead_params = load_valuehead_params(vhead_path, model_args)
+ if vhead_params is not None:
+ model.load_state_dict(vhead_params, strict=False)
+ logger.info("Loaded valuehead from checkpoint: {}".format(vhead_path))
+
+ # Prepare model for inference
+ if not is_trainable:
+ model.requires_grad_(False) # fix all model params
+ infer_dtype = (
+ torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
+ ) # detect cuda capability
+ model = model.to(infer_dtype) if model_args.quantization_bit is None else model
+
+ trainable_params, all_param = count_parameters(model)
+ logger.info(
+ "trainable params: {:d} || all params: {:d} || trainable%: {:.4f}".format(
+ trainable_params, all_param, 100 * trainable_params / all_param
+ )
+ )
+
+ return model, tokenizer
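+
+
+# A minimal inference sketch (argument values are assumptions):
+#
+#     model_args, _, finetuning_args, _ = get_infer_args(
+#         {"model_name_or_path": "codellama/CodeLlama-7b-Instruct-hf"}
+#     )
+#     model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args)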
+
+
+def dispatch_model(model: "PreTrainedModel") -> "PreTrainedModel":
+ r"""
+ Dispatches a pre-trained model to GPUs with balanced memory.
+ Borrowed from: https://github.com/huggingface/transformers/blob/v4.31.0/src/transformers/modeling_utils.py#L2803
+ """
+ if getattr(model, "is_loaded_in_8bit", False) or getattr(
+ model, "is_loaded_in_4bit", False
+ ): # do nothing
+ return model
+
+ if torch.cuda.device_count() > 1:
+ from accelerate import dispatch_model
+ from accelerate.utils import get_balanced_memory, infer_auto_device_map
+
+ if model._no_split_modules is None:
+ raise ValueError(
+ "The model class needs to implement the `_no_split_modules` attribute."
+ )
+
+ kwargs = {
+ "dtype": model.dtype,
+ "no_split_module_classes": model._no_split_modules,
+ }
+ max_memory = get_balanced_memory(model, **kwargs)
+ # Make sure tied weights are tied before creating the device map.
+ model.tie_weights()
+ device_map = infer_auto_device_map(model, max_memory=max_memory, **kwargs)
+ return dispatch_model(model, device_map)
+ else:
+ return model.cuda()
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/loggings.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/loggings.py
new file mode 100644
index 00000000..b33270fb
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/loggings.py
@@ -0,0 +1,229 @@
+import json
+import logging
+import os
+import sys
+import time
+from datetime import timedelta
+from typing import TYPE_CHECKING
+
+from transformers import TrainerCallback
+from transformers.trainer_utils import has_length
+
+from ..configs.config import LOG_FILE_NAME
+
+if TYPE_CHECKING:
+ from transformers import TrainerControl, TrainerState, TrainingArguments
+
+
+def reset_logging():
+ r"""
+ Removes basic config of root logger
+ """
+ root = logging.getLogger()
+ list(map(root.removeHandler, root.handlers))
+ list(map(root.removeFilter, root.filters))
+
+
+def get_logger(name: str) -> logging.Logger:
+ formatter = logging.Formatter(
+ fmt="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
+ datefmt="%m/%d/%Y %H:%M:%S",
+ )
+ handler = logging.StreamHandler(sys.stdout)
+ handler.setFormatter(formatter)
+
+ logger = logging.getLogger(name)
+ logger.setLevel(logging.INFO)
+ logger.addHandler(handler)
+
+ return logger
+
+
+logger = get_logger(__name__)
+
+
+class LoggerHandler(logging.Handler):
+ def __init__(self):
+ super().__init__()
+ self.log = ""
+
+ def reset(self):
+ self.log = ""
+
+ def emit(self, record):
+ if record.name == "httpx":
+ return
+ log_entry = self.format(record)
+ self.log += log_entry
+ self.log += "\n\n"
+
+
+class LogCallback(TrainerCallback):
+ def __init__(self, runner=None):
+ self.runner = runner
+ self.in_training = False
+ self.start_time = time.time()
+ self.cur_steps = 0
+ self.max_steps = 0
+ self.elapsed_time = ""
+ self.remaining_time = ""
+
+ def timing(self):
+ cur_time = time.time()
+ elapsed_time = cur_time - self.start_time
+ avg_time_per_step = elapsed_time / self.cur_steps if self.cur_steps != 0 else 0
+ remaining_time = (self.max_steps - self.cur_steps) * avg_time_per_step
+ self.elapsed_time = str(timedelta(seconds=int(elapsed_time)))
+ self.remaining_time = str(timedelta(seconds=int(remaining_time)))
+
+ def on_train_begin(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+ Event called at the beginning of training.
+ """
+ if state.is_local_process_zero:
+ self.in_training = True
+ self.start_time = time.time()
+ self.max_steps = state.max_steps
+ if os.path.exists(os.path.join(args.output_dir, LOG_FILE_NAME)):
+ logger.warning("Previous log file in this folder will be deleted.")
+ os.remove(os.path.join(args.output_dir, LOG_FILE_NAME))
+
+ def on_train_end(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+ Event called at the end of training.
+ """
+ if state.is_local_process_zero:
+ self.in_training = False
+ self.cur_steps = 0
+ self.max_steps = 0
+
+ def on_substep_end(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+        Event called at the end of a substep during gradient accumulation.
+ """
+ if (
+ state.is_local_process_zero
+ and self.runner is not None
+ and self.runner.aborted
+ ):
+ control.should_epoch_stop = True
+ control.should_training_stop = True
+
+ def on_step_end(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+ Event called at the end of a training step.
+ """
+ if state.is_local_process_zero:
+ self.cur_steps = state.global_step
+ self.timing()
+ if self.runner is not None and self.runner.aborted:
+ control.should_epoch_stop = True
+ control.should_training_stop = True
+
+ def on_evaluate(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+ Event called after an evaluation phase.
+ """
+ if state.is_local_process_zero and not self.in_training:
+ self.cur_steps = 0
+ self.max_steps = 0
+
+ def on_predict(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ *other,
+ **kwargs
+ ):
+ r"""
+ Event called after a successful prediction.
+ """
+ if state.is_local_process_zero and not self.in_training:
+ self.cur_steps = 0
+ self.max_steps = 0
+
+ def on_log(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ) -> None:
+ r"""
+ Event called after logging the last logs.
+ """
+ if not state.is_local_process_zero:
+ return
+
+ logs = dict(
+ current_steps=self.cur_steps,
+ total_steps=self.max_steps,
+ loss=state.log_history[-1].get("loss", None),
+ eval_loss=state.log_history[-1].get("eval_loss", None),
+ predict_loss=state.log_history[-1].get("predict_loss", None),
+ reward=state.log_history[-1].get("reward", None),
+ learning_rate=state.log_history[-1].get("learning_rate", None),
+ epoch=state.log_history[-1].get("epoch", None),
+ percentage=round(self.cur_steps / self.max_steps * 100, 2)
+ if self.max_steps != 0
+ else 100,
+ elapsed_time=self.elapsed_time,
+ remaining_time=self.remaining_time,
+ )
+ os.makedirs(args.output_dir, exist_ok=True)
+ with open(
+ os.path.join(args.output_dir, "trainer_log.jsonl"), "a", encoding="utf-8"
+ ) as f:
+ f.write(json.dumps(logs) + "\n")
+
+ def on_prediction_step(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs
+ ):
+ r"""
+ Event called after a prediction step.
+ """
+ eval_dataloader = kwargs.pop("eval_dataloader", None)
+ if (
+ state.is_local_process_zero
+ and has_length(eval_dataloader)
+ and not self.in_training
+ ):
+ if self.max_steps == 0:
+ self.max_steps = len(eval_dataloader)
+ self.cur_steps += 1
+ self.timing()
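+
+
+# A minimal sketch: the callback is handed to a Hugging Face trainer and
+# appends one JSON line per logging event to <output_dir>/trainer_log.jsonl,
+# including loss, learning rate, and elapsed/remaining time.
+#
+#     trainer = Seq2SeqTrainer(..., callbacks=[LogCallback()])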
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/model_trainer.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/model_trainer.py
new file mode 100644
index 00000000..0a56f7f3
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/llm_base/model_trainer.py
@@ -0,0 +1,412 @@
+import json
+import math
+import os
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Tuple, Union
+
+import jieba
+import matplotlib.pyplot as plt
+import numpy as np
+import torch
+import torch.nn as nn
+from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
+from peft import PeftModel
+from rouge_chinese import Rouge
+from transformers import Seq2SeqTrainer
+from transformers.generation.logits_process import LogitsProcessor
+from transformers.generation.utils import LogitsProcessorList
+from transformers.modeling_utils import (
+ PreTrainedModel,
+ load_sharded_checkpoint,
+ unwrap_model,
+)
+from transformers.trainer import (
+ TRAINER_STATE_NAME,
+ TRAINING_ARGS_NAME,
+ WEIGHTS_INDEX_NAME,
+ WEIGHTS_NAME,
+)
+from trl import PreTrainedModelWrapper
+
+from ..configs.config import FINETUNING_ARGS_NAME, IGNORE_INDEX, VALUE_HEAD_FILE_NAME
+from .config_parser import get_state_dict, get_train_args, load_trainable_params
+from .load_tokenizer import load_model_and_tokenizer
+from .loggings import get_logger
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedTokenizer, Seq2SeqTrainingArguments, TrainerState
+ from transformers.trainer import PredictionOutput
+
+ from ..configs.model_args import FinetuningArguments
+
+
+logger = get_logger(__name__)
+
+
+class PeftModelMixin:
+ r"""
+ Patches the save and load methods in Hugging Face Trainer for PeftModel and ModelWithValueHead.
+ """
+
+ def __init__(self) -> None: # for type checking
+ self.model: PreTrainedModel = None
+ self.tokenizer: "PreTrainedTokenizer" = None
+ self.args: "Seq2SeqTrainingArguments" = None
+ self.finetuning_args: "FinetuningArguments" = None
+ self.state: "TrainerState" = None
+ raise AssertionError("Mixin should not be initialized.")
+
+ def _save(
+ self,
+ output_dir: Optional[str] = None,
+ state_dict: Optional[Dict[str, torch.Tensor]] = None,
+ ) -> None:
+ r"""
+ Saves trainable parameters as model checkpoint.
+
+ This function will only be executed at the process zero.
+
+ Subclass and override to inject custom behavior. It should not be directly used by external scripts.
+ """
+ output_dir = output_dir if output_dir is not None else self.args.output_dir
+ os.makedirs(output_dir, exist_ok=True)
+ logger.info(f"Saving model checkpoint to {output_dir}")
+
+ model = unwrap_model(self.model)
+ if isinstance(model, PreTrainedModelWrapper):
+ # Custom state dict: https://github.com/lvwerra/trl/blob/v0.4.7/trl/models/modeling_value_head.py#L200
+ model_state_dict = state_dict or model.state_dict()
+ v_head_state_dict = {
+ name.replace("v_head.", ""): model_state_dict[name]
+ .cpu()
+ .clone()
+ .detach()
+ for name in model_state_dict.keys()
+ if name.startswith("v_head.")
+ }
+
+ torch.save(
+ v_head_state_dict, os.path.join(output_dir, VALUE_HEAD_FILE_NAME)
+ )
+ model = model.pretrained_model
+
+ state_dict = state_dict or get_state_dict(model)
+ if isinstance(model, (PeftModel, PreTrainedModel)):
+ model.config.use_cache = True
+ model.save_pretrained(
+ output_dir,
+ state_dict=state_dict,
+ safe_serialization=self.args.save_safetensors,
+ )
+ model.config.use_cache = False
+ else:
+ torch.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME))
+
+ if (
+ self.finetuning_args.finetuning_type == "full"
+ and self.tokenizer is not None
+ ):
+ try:
+ self.tokenizer.save_pretrained(output_dir)
+            except Exception:
+ logger.warning("Cannot save tokenizer, copy the files manually.")
+
+ with open(
+ os.path.join(output_dir, TRAINING_ARGS_NAME), "w", encoding="utf-8"
+ ) as f:
+ f.write(self.args.to_json_string() + "\n")
+
+ self.finetuning_args.save_to_json(
+ os.path.join(output_dir, FINETUNING_ARGS_NAME)
+ )
+
+ def _load_best_model(self):
+ r"""
+ Loads trainable parameters from model checkpoint.
+
+ Subclass and override to inject custom behavior. It should not be directly used by external scripts.
+ """
+ logger.info(
+ f"Loading best model from {self.state.best_model_checkpoint} (score: {self.state.best_metric})."
+ )
+ model = unwrap_model(self.model)
+
+ if isinstance(model, PreTrainedModelWrapper):
+ model.v_head.load_state_dict(
+ torch.load(
+ os.path.join(
+ self.state.best_model_checkpoint, VALUE_HEAD_FILE_NAME
+ ),
+ map_location="cpu",
+ )
+ )
+ model = model.pretrained_model
+
+ if isinstance(model, PeftModel):
+ model.load_adapter(self.state.best_model_checkpoint, model.active_adapter)
+ else: # freeze/full-tuning
+ load_trainable_params(model, self.state.best_model_checkpoint)
+
+
+class PeftTrainer(PeftModelMixin, Seq2SeqTrainer):
+ r"""
+ Inherits Seq2SeqTrainer to support parameter-efficient checkpoints.
+ """
+
+ def __init__(self, finetuning_args: "FinetuningArguments", **kwargs):
+ Seq2SeqTrainer.__init__(self, **kwargs)
+ self.finetuning_args = finetuning_args
+
+
+class Seq2SeqPeftTrainer(PeftTrainer):
+ r"""
+ Inherits PeftTrainer to compute generative metrics such as BLEU and ROUGE.
+ """
+
+ def prediction_step(
+ self,
+ model: nn.Module,
+ inputs: Dict[str, Union[torch.Tensor, Any]],
+ prediction_loss_only: bool,
+ ignore_keys: Optional[List[str]] = None,
+ ) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
+ r"""
+ Removes the prompt part in the generated tokens.
+
+ Subclass and override to inject custom behavior.
+ """
+ prompt_len, label_len = inputs["input_ids"].size(-1), inputs["labels"].size(-1)
+ if prompt_len > label_len:
+ inputs["labels"] = self._pad_tensors_to_target_len(
+ inputs["labels"], inputs["input_ids"]
+ )
+ if label_len > prompt_len:
+ inputs["input_ids"] = self._pad_tensors_to_target_len(
+ inputs["input_ids"], inputs["labels"]
+ )
+ if "attention_mask" in inputs:
+ inputs["attention_mask"] = self._pad_tensors_to_target_len(
+ inputs["attention_mask"], inputs["labels"], pad_token_id=0
+ )
+ if "position_ids" in inputs:
+ inputs["position_ids"] = self._pad_tensors_to_target_len(
+ inputs["position_ids"], inputs["labels"], pad_token_id=0
+ )
+
+ loss, generated_tokens, labels = super().prediction_step(
+ model,
+ inputs,
+ prediction_loss_only=prediction_loss_only,
+ ignore_keys=ignore_keys,
+ )
+ if generated_tokens is not None:
+ generated_tokens[
+ :, : max(prompt_len, label_len)
+ ] = self.tokenizer.pad_token_id * torch.ones_like(
+ generated_tokens[:, : max(prompt_len, label_len)]
+ )
+
+ return loss, generated_tokens, labels
+
+ def _pad_tensors_to_target_len(
+ self,
+ src_tensor: torch.Tensor,
+ tgt_tensor: torch.Tensor,
+ pad_token_id: Optional[int] = None,
+ ) -> torch.Tensor:
+ r"""
+ Pads the tensor to the same length as the target tensor.
+
+ Should only be called when predict_with_generate=True.
+ """
+ if pad_token_id is None:
+ if self.tokenizer is not None and hasattr(self.tokenizer, "pad_token_id"):
+ assert (
+ self.tokenizer.padding_side == "left"
+ ), "This method only accepts left-padded tensor."
+ pad_token_id = self.tokenizer.pad_token_id
+ else:
+ raise ValueError("PAD token is required.")
+
+ padded_tensor = pad_token_id * torch.ones_like(tgt_tensor)
+ padded_tensor[:, -src_tensor.shape[-1] :] = src_tensor # adopt left-padding
+ return padded_tensor.contiguous() # in contiguous memory
+
+ def save_predictions(self, predict_results: "PredictionOutput") -> None:
+ r"""
+ Saves model predictions to `output_dir`.
+
+        A custom behavior that is not contained in Seq2SeqTrainer.
+ """
+ if not self.is_world_process_zero():
+ return
+
+ output_prediction_file = os.path.join(
+ self.args.output_dir, "generated_predictions.jsonl"
+ )
+ logger.info(f"Saving prediction results to {output_prediction_file}")
+
+ preds = np.where(
+ predict_results.predictions != IGNORE_INDEX,
+ predict_results.predictions,
+ self.tokenizer.pad_token_id,
+ )
+ labels = np.where(
+ predict_results.label_ids != IGNORE_INDEX,
+ predict_results.label_ids,
+ self.tokenizer.pad_token_id,
+ )
+
+ decoded_preds = self.tokenizer.batch_decode(
+ preds, skip_special_tokens=True, clean_up_tokenization_spaces=True
+ )
+ decoded_labels = self.tokenizer.batch_decode(
+ labels, skip_special_tokens=True, clean_up_tokenization_spaces=True
+ )
+
+ with open(output_prediction_file, "w", encoding="utf-8") as writer:
+ res: List[str] = []
+ for pred, label in zip(decoded_preds, decoded_labels):
+ res.append(
+ json.dumps({"label": label, "predict": pred}, ensure_ascii=False)
+ )
+ writer.write("\n".join(res))
+
+
+@dataclass
+class ComputeMetrics:
+ r"""
+ Wraps the tokenizer into metric functions, used in Seq2SeqPeftTrainer.
+ """
+
+ tokenizer: "PreTrainedTokenizer"
+
+ def __call__(
+ self, eval_preds: Sequence[Union[np.ndarray, Tuple[np.ndarray]]]
+ ) -> Dict[str, float]:
+ r"""
+ Uses the model predictions to compute metrics.
+ """
+ preds, labels = eval_preds
+ score_dict = {"rouge-1": [], "rouge-2": [], "rouge-l": [], "bleu-4": []}
+
+ preds = np.where(preds != IGNORE_INDEX, preds, self.tokenizer.pad_token_id)
+ labels = np.where(labels != IGNORE_INDEX, labels, self.tokenizer.pad_token_id)
+
+ decoded_preds = self.tokenizer.batch_decode(preds, skip_special_tokens=True)
+ decoded_labels = self.tokenizer.batch_decode(labels, skip_special_tokens=True)
+
+ for pred, label in zip(decoded_preds, decoded_labels):
+ hypothesis = list(jieba.cut(pred))
+ reference = list(jieba.cut(label))
+
+ if (
+ len(" ".join(hypothesis).split()) == 0
+ or len(" ".join(reference).split()) == 0
+ ):
+ result = {
+ "rouge-1": {"f": 0.0},
+ "rouge-2": {"f": 0.0},
+ "rouge-l": {"f": 0.0},
+ }
+ else:
+ rouge = Rouge()
+ scores = rouge.get_scores(" ".join(hypothesis), " ".join(reference))
+ result = scores[0]
+
+ for k, v in result.items():
+ score_dict[k].append(round(v["f"] * 100, 4))
+
+ bleu_score = sentence_bleu(
+ [list(label)],
+ list(pred),
+ smoothing_function=SmoothingFunction().method3,
+ )
+ score_dict["bleu-4"].append(round(bleu_score * 100, 4))
+
+ return {k: float(np.mean(v)) for k, v in score_dict.items()}
+
+
+# Avoid runtime error in model.generate(do_sample=True).
+class InvalidScoreLogitsProcessor(LogitsProcessor):
+ def __call__(
+ self, input_ids: torch.LongTensor, scores: torch.FloatTensor
+ ) -> torch.FloatTensor:
+ if torch.isnan(scores).any() or torch.isinf(scores).any():
+ scores.zero_()
+ scores[..., 0] = 1.0
+ return scores
+
+
+def get_logits_processor() -> LogitsProcessorList:
+ logits_processor = LogitsProcessorList()
+ logits_processor.append(InvalidScoreLogitsProcessor())
+ return logits_processor
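+
+
+# Attached at generation time to replace NaN/Inf logits, e.g.
+#
+#     model.generate(**gen_inputs, logits_processor=get_logits_processor())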
+
+
+# Used when plotting training metrics below.
+def smooth(scalars: List[float]) -> List[float]:
+ r"""
+ EMA implementation according to TensorBoard.
+ """
+ last = scalars[0]
+ smoothed = list()
+ weight = 1.8 * (
+ 1 / (1 + math.exp(-0.05 * len(scalars))) - 0.5
+ ) # a sigmoid function
+ for next_val in scalars:
+ smoothed_val = last * weight + (1 - weight) * next_val
+ smoothed.append(smoothed_val)
+ last = smoothed_val
+ return smoothed
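+
+
+# The EMA weight is a sigmoid of the series length: it stays small for short
+# series (light smoothing) and approaches 0.9 as the series grows, matching
+# TensorBoard's behavior on long loss curves.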
+
+
+def plot_loss(
+ save_dictionary: os.PathLike, keys: Optional[List[str]] = ["loss"]
+) -> None:
+ with open(
+ os.path.join(save_dictionary, TRAINER_STATE_NAME), "r", encoding="utf-8"
+ ) as f:
+ data = json.load(f)
+
+ for key in keys:
+ steps, metrics = [], []
+ for i in range(len(data["log_history"])):
+ if key in data["log_history"][i]:
+ steps.append(data["log_history"][i]["step"])
+ metrics.append(data["log_history"][i][key])
+
+ if len(metrics) == 0:
+ logger.warning(f"No metric {key} to plot.")
+ continue
+
+ plt.figure()
+ plt.plot(steps, metrics, alpha=0.4, label="original")
+ plt.plot(steps, smooth(metrics), label="smoothed")
+ plt.title("training {} of {}".format(key, save_dictionary))
+ plt.xlabel("step")
+ plt.ylabel(key)
+ plt.legend()
+ plt.savefig(
+ os.path.join(save_dictionary, "training_{}.png".format(key)),
+ format="png",
+ dpi=100,
+ )
+ print(
+ "Figure saved:",
+ os.path.join(save_dictionary, "training_{}.png".format(key)),
+ )
+
+
+def export_model(
+ args: Optional[Dict[str, Any]] = None, max_shard_size: Optional[str] = "10GB"
+):
+ model_args, _, training_args, finetuning_args, _ = get_train_args(
+ args, data_args_init=False
+ )
+ model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args)
+ model.save_pretrained(training_args.output_dir, max_shard_size=max_shard_size)
+ try:
+ tokenizer.save_pretrained(training_args.output_dir)
+    except Exception:
+ logger.warning("Cannot save tokenizer, please copy the files manually.")
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/__init__.py
new file mode 100644
index 00000000..d9cb30ed
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/__init__.py
@@ -0,0 +1,8 @@
+"""
+dbgpt_hub.predict
+==============
+"""
+
+from .predict_api import start_predict
+
+__all__ = ["start_predict"]
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict.py
new file mode 100644
index 00000000..cdff152b
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict.py
@@ -0,0 +1,51 @@
+import json
+import os
+import sys
+
+ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+sys.path.append(ROOT_PATH)
+
+from typing import Any, Dict, List, Optional
+
+from dbgpt_hub_gql.data_process.data_utils import extract_sql_prompt_dataset
+from dbgpt_hub_gql.llm_base.chat_model import ChatModel
+from tqdm import tqdm
+
+
+def prepare_dataset(
+ predict_file_path: Optional[str] = None,
+) -> List[Dict]:
+ with open(predict_file_path, "r") as fp:
+ data = json.load(fp)
+ predict_data = [extract_sql_prompt_dataset(item) for item in data]
+ return predict_data
+
+
+def inference(model: ChatModel, predict_data: List[Dict], **input_kwargs):
+ res = []
+ # test
+ # for item in predict_data[:20]:
+ for item in tqdm(predict_data, desc="Inference Progress", unit="item"):
+ print(f"item[input] \n{item['input']}")
+ response, _ = model.chat(query=item["input"], history=[], **input_kwargs)
+ res.append(response)
+ return res
+
+
+def predict(model: ChatModel):
+ args = model.data_args
+    # The input file is given via --predicted_input_filename; the output file via --predicted_out_filename
+ predict_data = prepare_dataset(args.predicted_input_filename)
+ result = inference(model, predict_data)
+
+ with open(args.predicted_out_filename, "w") as f:
+ for p in result:
+ try:
+ f.write(p.replace("\n", " ") + "\n")
+            except Exception:
+ f.write("Invalid Output!\n")
+
+
+if __name__ == "__main__":
+ model = ChatModel()
+ predict(model)
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict_api.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict_api.py
new file mode 100644
index 00000000..78084738
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/predict/predict_api.py
@@ -0,0 +1,32 @@
+import os
+from typing import Any, Dict, Optional
+
+from dbgpt_hub_gql.llm_base.chat_model import ChatModel
+from dbgpt_hub_gql.predict import predict
+
+
+def start_predict(
+ args: Optional[Dict[str, Any]] = None, cuda_visible_devices: Optional[str] = "0"
+):
+ # Setting CUDA Device
+ os.environ["CUDA_VISIBLE_DEVICES"] = cuda_visible_devices
+
+ # Default Arguments
+    if args is None:
+        args = {
+            "model_name_or_path": "codellama/CodeLlama-13b-Instruct-hf",
+            "template": "llama2",
+            "finetuning_type": "lora",
+            "checkpoint_dir": "dbgpt_hub_gql/output/adapter/CodeLlama-13b-gql-lora",
+            # Key names must match the data arguments read in predict.py
+            "predicted_input_filename": "dbgpt_hub_gql/data/tugraph-db-example/dev.json",
+            "predicted_out_filename": "dbgpt_hub_gql/output/pred/pred_gql.txt",
+        }
+
+    # Execute prediction: predict.predict expects a ChatModel, which is
+    # constructed here from the argument dict (see llm_base.chat_model)
+    model = ChatModel(args)
+    predict.predict(model)
+
+
+if __name__ == "__main__":
+ start_predict()
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/export_merge.sh b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/export_merge.sh
new file mode 100644
index 00000000..3a347105
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/export_merge.sh
@@ -0,0 +1,19 @@
+# llama2 series
+python dbgpt_hub_gql/train/export_model.py \
+ --model_name_or_path codellama/CodeLlama-7b-Instruct-hf \
+ --template llama2 \
+ --finetuning_type lora \
+ --checkpoint_dir dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora \
+ --output_dir dbgpt_hub_gql/output/codellama-7b-gql-sft \
+ --fp16
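+
+# The export step merges the LoRA adapter from --checkpoint_dir into the base
+# model and writes the full merged weights to --output_dir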
+
+
+## Baichuan2
+# python dbgpt_hub_gql/train/export_model.py \
+# --model_name_or_path Your_base_model_path_like_Baichuan2-13B-Chat \
+# --template Your_template_like_baichuan2_eval \
+# --finetuning_type lora \
+# --checkpoint_dir Your_ckpt_path_checkpoint-100 \
+# --output_dir Your_export_model_like_output_merge_model_baichuan2-13b-qlora_merge \
+# --fp16
+# # --bf16
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/gen_train_eval_data.sh b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/gen_train_eval_data.sh
new file mode 100644
index 00000000..fbe8e1cb
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/gen_train_eval_data.sh
@@ -0,0 +1,2 @@
+# Based on the spider dataset, produce the train and dev data
+python dbgpt_hub_gql/data_process/sql_data_process.py
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/predict_sft.sh b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/predict_sft.sh
new file mode 100644
index 00000000..7b1b2db9
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/predict_sft.sh
@@ -0,0 +1,47 @@
+## llama2 prediction test
+
+current_date=$(date +"%Y%m%d_%H%M")
+pred_log="dbgpt_hub_gql/output/logs/pred_test_${current_date}.log"
+start_time=$(date +%s)
+echo " Pred Start time: $(date -d @$start_time +'%Y-%m-%d %H:%M:%S')" >>${pred_log}
+
+# CUDA_VISIBLE_DEVICES=0,1 python dbgpt_hub_gql/predict/predict.py \
+# --model_name_or_path codellama/CodeLlama-7b-Instruct-hf \
+# --template llama2 \
+# --finetuning_type lora \
+# --predicted_input_filename dbgpt_hub_gql/data/tugraph-analytics/dev.json \
+# --checkpoint_dir dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora \
+# --predicted_out_filename dbgpt_hub_gql/output/pred/tugraph_analytics_dev.txt >> ${pred_log}
+
+CUDA_VISIBLE_DEVICES=0,1 python dbgpt_hub_gql/predict/predict.py \
+ --model_name_or_path codellama/CodeLlama-7b-Instruct-hf \
+ --template llama2 \
+ --finetuning_type lora \
+ --predicted_input_filename dbgpt_hub_gql/data/tugraph-db-example/dev.json \
+ --checkpoint_dir dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora \
+ --predicted_out_filename dbgpt_hub_gql/output/pred/tugraph_db_example_dev.txt >> ${pred_log}
+
+echo "############pred end###############" >>${pred_log}
+echo "pred End time: $(date)" >>${pred_log}
+end_time=$(date +%s)
+duration=$((end_time - start_time))
+hours=$((duration / 3600))
+min=$(( (duration % 3600) / 60))
+echo "Time elapsed: ${hour} hour $min min " >>${pred_log}
+
+
+# # baichuan2_eval test
+# CUDA_VISIBLE_DEVICES=0 python dbgpt_hub_gql/predict/predict.py \
+# --model_name_or_path /home/model/Baichuan2-13B-Chat \
+# --template baichuan2_eval \
+# --quantization_bit 4 \
+# --finetuning_type lora \
+# --checkpoint_dir dbgpt_hub_gql/output/adapter/baichuan2-13b-qlora
+
+
+## codellama2_pred test on A100
+# CUDA_VISIBLE_DEVICES=0,1 python dbgpt_hub_gql/predict/predict.py \
+# --model_name_or_path /home/model_files/codellama/CodeLlama-7b-Instruct-hf \
+# --template llama2 \
+# --finetuning_type lora \
+# --checkpoint_dir dbgpt_hub_gql/output/adapter/code_llama_7b-qlora
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_rm.sh b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_rm.sh
new file mode 100644
index 00000000..3a7e055e
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_rm.sh
@@ -0,0 +1,43 @@
+wandb offline # Run wandb in offline mode
+# A100, single GPU
+current_date=$(date +"%Y%m%d_%H%M")
+train_log="dbgpt_hub_gql/output/logs/train_sft_test_${current_date}.log"
+start_time=$(date +%s)
+echo " Train Start time: $(date -d @$start_time +'%Y-%m-%d %H:%M:%S')" >>${train_log}
+
+# The default parameters can run on a server with a single A100 (40G) GPU; if your server cannot support them, use smaller settings such as a lower lora_rank together with QLoRA (quantization_bit 4).
+deepspeed --num_gpus 4 dbgpt_hub_gql/train/rm_train.py \
+ --deepspeed dbgpt_hub_gql/configs/ds_config.json \
+ --stage rm \
+ --model_name_or_path /home/CPF/LLM/qwen-7b-chat \
+ --do_train \
+ --dataset example_rm_train \
+ --max_source_length 1024 \
+ --max_target_length 512 \
+ --finetuning_type lora \
+ --lora_target c_attn \
+ --template chatml \
+ --lora_rank 64 \
+ --lora_alpha 32 \
+ --output_dir dbgpt_hub_gql/output/adapter/qwen-7b-rm-test \
+ --overwrite_cache \
+ --overwrite_output_dir \
+ --per_device_train_batch_size 1 \
+ --gradient_accumulation_steps 2 \
+ --lr_scheduler_type cosine_with_restarts \
+ --logging_steps 50 \
+ --save_steps 2000 \
+ --learning_rate 1e-6 \
+ --num_train_epochs 0.05 \
+ --plot_loss True \
+ --quantization_bit 4 >> ${train_log}
+    # --bf16  # V100 does not support bf16
+
+echo "############train end###############" >>${train_log}
+echo "Train End time: $(date)" >>${train_log}
+end_time=$(date +%s)
+duration=$((end_time - start_time))
+hours=$((duration / 3600))
+min=$(( (duration % 3600) / 60))
+echo "Time elapsed: ${hour} hour $min min " >>${train_log}
\ No newline at end of file
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_sft.sh b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_sft.sh
new file mode 100644
index 00000000..7465c382
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/scripts/train_sft.sh
@@ -0,0 +1,127 @@
+wandb offline # Run wandb in offline mode
+# A100, single GPU
+current_date=$(date +"%Y%m%d_%H%M")
+train_log="dbgpt_hub_gql/output/logs/train_sft_test_${current_date}.log"
+start_time=$(date +%s)
+echo " Train Start time: $(date -d @$start_time +'%Y-%m-%d %H:%M:%S')" >>${train_log}
+
+# default training: zero-shot
+num_shot=0
+
+# one-shot train
+# num_shot=1
+
+# dataset="tugraph_analytics_train"
+# dataset="tugraph_db_train"
+dataset="tugraph_db_example_train"
+model_name_or_path=${model_name_or_path-"codellama/CodeLlama-7b-Instruct-hf"}
+output_dir="dbgpt_hub_gql/output/adapter/CodeLlama-7b-gql-lora"
+
+# The default parameters can run on a server with a single V100 (16G) GPU
+CUDA_VISIBLE_DEVICES=0 python dbgpt_hub_gql/train/sft_train.py \
+ --model_name_or_path $model_name_or_path \
+ --do_train \
+ --dataset $dataset \
+ --max_source_length 1024 \
+ --max_target_length 512 \
+ --finetuning_type lora \
+ --lora_target q_proj,v_proj \
+ --template llama2 \
+ --lora_rank 64 \
+ --lora_alpha 32 \
+ --output_dir $output_dir \
+ --overwrite_cache \
+ --overwrite_output_dir \
+ --per_device_train_batch_size 1 \
+ --gradient_accumulation_steps 16 \
+ --lr_scheduler_type cosine_with_restarts \
+ --logging_steps 50 \
+ --save_steps 2000 \
+ --learning_rate 2e-4 \
+ --num_train_epochs 8 \
+ --plot_loss \
+ --fp16 >> ${train_log}
+
+echo "############train end###############" >>${train_log}
+echo "Train End time: $(date)" >>${train_log}
+end_time=$(date +%s)
+duration=$((end_time - start_time))
+hours=$((duration / 3600))
+min=$(( (duration % 3600) / 60))
+echo "Time elapsed: ${hour} hour $min min " >>${train_log}
+
+# The default parameters can run on a server with a single A100 (40G) GPU; if your server cannot support them, use smaller settings such as a lower lora_rank together with QLoRA (quantization_bit 4).
+# CUDA_VISIBLE_DEVICES=0 python dbgpt_hub_gql/train/sft_train.py \
+# --model_name_or_path $model_name_or_path \
+# --do_train \
+# --dataset $dataset \
+# --max_source_length 2048 \
+# --max_target_length 512 \
+# --finetuning_type lora \
+# --lora_target q_proj,v_proj \
+# --template llama2 \
+# --lora_rank 64 \
+# --lora_alpha 32 \
+# --output_dir $output_dir \
+# --overwrite_cache \
+# --overwrite_output_dir \
+# --per_device_train_batch_size 1 \
+# --gradient_accumulation_steps 16 \
+# --lr_scheduler_type cosine_with_restarts \
+# --logging_steps 50 \
+# --save_steps 2000 \
+# --learning_rate 2e-4 \
+# --num_train_epochs 8 \
+# --plot_loss \
+# --bf16 >> ${train_log}
+# # --bf16  # V100 does not support bf16
+
+# Multi-GPU, launched with deepspeed, A100
+# deepspeed --num_gpus 2 dbgpt_hub_gql/train/sft_train.py \
+# --deepspeed dbgpt_hub_gql/configs/stage2.json \
+# --quantization_bit 4 \
+# --model_name_or_path /home/model_files/Llama-2-13b-chat-hf \
+# --do_train \
+# --dataset example_text2sql_train \
+# --max_source_length 1024 \
+# --max_target_length 512 \
+# --template llama2 \
+# --finetuning_type lora \
+# --lora_rank 64 \
+# --lora_alpha 32 \
+# --lora_target q_proj,v_proj \
+# --output_dir dbgpt_hub_gql/output/adapter/llama2-13b-qlora_1024_epoch1_debug1008_withDeepseed_mulitCard \
+# --overwrite_cache \
+# --overwrite_output_dir \
+# --per_device_train_batch_size 1 \
+# --gradient_accumulation_steps 16 \
+# --lr_scheduler_type cosine_with_restarts \
+# --logging_steps 25 \
+# --save_steps 20 \
+# --learning_rate 2e-4 \
+# --num_train_epochs 0.1 \
+# --plot_loss \
+# --bf16 2>&1 | tee ${train_log}
+
+
+# Multi-GPU, deepspeed, full fine-tuning
+# deepspeed --include localhost:4,5,6,7 dbgpt_hub_gql/train/sft_train.py \
+# --dataset example_text2sql_train \
+# --model_name_or_path CodeLlama-7b-Instruct-hf \
+# --do_train \
+# --finetuning_type full \
+# --max_source_length 2048 \
+# --max_target_length 512 \
+# --template llama2 \
+# --output_dir dbgpt_hub_gql/output/adapter/code-llama-7b-2048_epoch4_full \
+# --overwrite_cache \
+# --overwrite_output_dir \
+# --per_device_train_batch_size 4 \
+# --gradient_accumulation_steps 16 \
+# --lr_scheduler_type cosine_with_restarts \
+# --logging_steps 50 \
+# --learning_rate 2e-5 \
+# --num_train_epochs 4 \
+# --plot_loss \
+#     --bf16 True \
+# --deepspeed dbgpt_hub_gql/configs/stage3.json 2>&1 | tee ${train_log}
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/train/__init__.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/__init__.py
new file mode 100644
index 00000000..4b578153
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/__init__.py
@@ -0,0 +1,8 @@
+"""
+dbgpt_hub.train
+==============
+"""
+
+from .sft_train_api import start_sft
+
+__all__ = ["start_sft"]
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/train/export_model.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/export_model.py
new file mode 100644
index 00000000..707a8855
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/export_model.py
@@ -0,0 +1,14 @@
+import os
+import sys
+
+ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+sys.path.append(ROOT_PATH)
+from dbgpt_hub_gql.llm_base.model_trainer import export_model
+
+
+def main():
+ export_model()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/train/rm_train.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/rm_train.py
new file mode 100644
index 00000000..313c85fe
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/rm_train.py
@@ -0,0 +1,318 @@
+import json
+import os
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Tuple, Union
+
+import numpy as np
+import torch
+from dbgpt_hub_gql.data_process.data_utils import (
+ get_dataset,
+ preprocess_dataset,
+ split_dataset,
+)
+from dbgpt_hub_gql.llm_base.config_parser import get_train_args
+from dbgpt_hub_gql.llm_base.load_tokenizer import load_model_and_tokenizer
+from dbgpt_hub_gql.llm_base.loggings import LogCallback, get_logger
+from dbgpt_hub_gql.llm_base.model_trainer import plot_loss
+from transformers import (
+ DataCollatorWithPadding,
+ Seq2SeqTrainingArguments,
+ Trainer,
+ TrainerCallback,
+ TrainerControl,
+ TrainerState,
+ TrainingArguments,
+)
+from transformers.modeling_utils import custom_object_save, unwrap_model
+from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR, has_length
+
+if TYPE_CHECKING:
+ from dbgpt_hub_gql.configs.data_args import DataArguments
+ from dbgpt_hub_gql.configs.model_args import (
+ FinetuningArguments,
+ GeneratingArguments,
+ ModelArguments,
+ )
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.trainer import PredictionOutput
+ from trl import AutoModelForCausalLMWithValueHead
+
+logger = get_logger(__name__)
+
+
+@dataclass
+class PairwiseDataCollatorWithPadding(DataCollatorWithPadding):
+ r"""
+ Data collator for pairwise data.
+ """
+
+ def __call__(self, features: Sequence[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
+ r"""
+ Pads batched data to the longest sequence in the batch.
+
+ We generate 2 * n examples where the first n examples represent chosen examples and
+ the last n examples represent rejected examples.
+ """
+ features = [
+ {
+ "input_ids": feature["prompt_ids"] + feature[key],
+ "attention_mask": [1]
+ * (len(feature["prompt_ids"]) + len(feature[key])),
+ }
+ for key in ("chosen_ids", "rejected_ids")
+ for feature in features
+ ]
+ return super().__call__(features)
+
+
+class PairwiseTrainer(Trainer):
+ r"""
+    Inherits Trainer to compute pairwise loss.
+ """
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.can_return_loss = True # override property to return eval_loss
+
+ def compute_loss(
+ self,
+ model: "PreTrainedModel",
+ inputs: Dict[str, torch.Tensor],
+ return_outputs: Optional[bool] = False,
+ ) -> Union[torch.Tensor, Tuple[torch.Tensor, List[torch.Tensor]]]:
+ r"""
+ Computes pairwise loss. The first n examples are chosen and the last n examples are rejected.
+
+ Subclass and override to inject custom behavior.
+
+ Note that the first element will be removed from the output tuple.
+ See: https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/trainer.py#L3509
+ """
+ # Compute rewards
+ _, _, values = model(**inputs, output_hidden_states=True, return_dict=True)
+
+ unwrapped_model: "PreTrainedModel" = self.accelerator.unwrap_model(self.model)
+ if getattr(unwrapped_model.config, "model_type", None) == "chatglm":
+ values = torch.transpose(values, 0, 1)
+
+ # Split the inputs and rewards into two parts, chosen and rejected
+ batch_size = inputs["input_ids"].size(0) // 2
+ chosen_input_ids, rejected_input_ids = (
+ inputs["input_ids"][:batch_size],
+ inputs["input_ids"][batch_size:],
+ )
+ chosen_rewards, rejected_rewards = values[:batch_size], values[batch_size:]
+ chosen_scores, rejected_scores = [], []
+
+ # Compute pairwise loss. Only backprop on the different tokens before padding
+ # Inspired by: https://github.com/CarperAI/trlx/blob/main/examples/summarize_rlhf/reward_model/reward_model.py
+ loss = 0
+ for i in range(batch_size):
+ chosen_length = (
+ chosen_input_ids[i] != self.tokenizer.pad_token_id
+ ).nonzero()[-1] + 1
+ rejected_length = (
+ rejected_input_ids[i] != self.tokenizer.pad_token_id
+ ).nonzero()[-1] + 1
+ check_divergence = (chosen_input_ids[i] != rejected_input_ids[i]).nonzero()
+
+ if len(check_divergence) == 0:
+ end_index = chosen_length
+ div_index = end_index - 1
+ else:
+ end_index = max(chosen_length, rejected_length)
+ div_index = check_divergence[0]
+
+ assert div_index > 0
+ chosen_trunc_rewards = chosen_rewards[i, div_index:end_index]
+ rejected_trunc_rewards = rejected_rewards[i, div_index:end_index]
+ if (
+ return_outputs
+ ): # use the score on the last token except pad token for inference
+ chosen_scores.append(chosen_rewards[i, chosen_length - 1])
+ rejected_scores.append(rejected_rewards[i, rejected_length - 1])
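+            # Bradley-Terry style objective: -log(sigmoid(r_chosen - r_rejected)),
+            # averaged over tokens from the divergence point to the sequence end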
+ loss += -torch.nn.functional.logsigmoid(
+ chosen_trunc_rewards - rejected_trunc_rewards
+ ).mean()
+
+ loss = loss / batch_size
+ if return_outputs:
+ chosen_scores, rejected_scores = torch.stack(chosen_scores), torch.stack(
+ rejected_scores
+ )
+ return loss, [loss, chosen_scores, rejected_scores]
+
+ return loss
+
+ def save_predictions(self, predict_results: "PredictionOutput") -> None:
+ r"""
+ Saves model predictions to `output_dir`.
+
+        A custom behavior that is not contained in Seq2SeqTrainer.
+ """
+ if not self.is_world_process_zero():
+ return
+
+ output_prediction_file = os.path.join(
+ self.args.output_dir, "generated_predictions.jsonl"
+ )
+ logger.info(f"Saving prediction results to {output_prediction_file}")
+ chosen_scores, rejected_scores = predict_results.predictions
+
+ with open(output_prediction_file, "w", encoding="utf-8") as writer:
+ res: List[str] = []
+ for c_score, r_score in zip(chosen_scores, rejected_scores):
+ res.append(
+ json.dumps(
+ {
+ "chosen": round(float(c_score), 2),
+ "rejected": round(float(r_score), 2),
+ }
+ )
+ )
+ writer.write("\n".join(res))
+
+
+class SavePeftModelCallback(TrainerCallback):
+ def _save_model_with_valuehead(
+ self, model: "AutoModelForCausalLMWithValueHead", output_dir: str
+ ) -> None:
+ model.pretrained_model.config.save_pretrained(output_dir)
+ if model.pretrained_model.can_generate():
+ model.pretrained_model.generation_config.save_pretrained(output_dir)
+ if getattr(model, "is_peft_model", False):
+ model.pretrained_model.save_pretrained(output_dir)
+ elif getattr(
+ model.pretrained_model, "_auto_class", None
+        ):  # must not be a peft model
+ custom_object_save(
+ model.pretrained_model, output_dir, config=model.pretrained_model.config
+ )
+
+ def on_save(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs,
+ ):
+ r"""
+ Event called after a checkpoint save.
+ """
+ if args.should_save:
+ self._save_model_with_valuehead(
+ model=unwrap_model(kwargs.pop("model")),
+ output_dir=os.path.join(
+ args.output_dir,
+ "{}-{}".format(PREFIX_CHECKPOINT_DIR, state.global_step),
+ ),
+ )
+
+ def on_train_end(
+ self,
+ args: "TrainingArguments",
+ state: "TrainerState",
+ control: "TrainerControl",
+ **kwargs,
+ ):
+ r"""
+ Event called at the end of training.
+ """
+
+ if args.should_save:
+ self._save_model_with_valuehead(
+ model=unwrap_model(kwargs.pop("model")), output_dir=args.output_dir
+ )
+
+
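+# preds is the pair (chosen_scores, rejected_scores); an example counts as
+# correct when its chosen score exceeds its rejected score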
+def compute_accuracy(
+ eval_preds: Sequence[Union[np.ndarray, Tuple[np.ndarray]]]
+) -> Dict[str, float]:
+ preds, _ = eval_preds
+ return {"accuracy": (preds[0] > preds[1]).sum() / len(preds[0])}
+
+
+def run_rm(
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ finetuning_args: "FinetuningArguments",
+ callbacks: Optional[List["TrainerCallback"]] = None,
+):
+ dataset = get_dataset(model_args, data_args)
+ model, tokenizer = load_model_and_tokenizer(
+ model_args, finetuning_args, training_args.do_train, add_valuehead=True
+ )
+ dataset = preprocess_dataset(
+ dataset, tokenizer, data_args, training_args, stage="rm"
+ )
+ data_collator = PairwiseDataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)
+
+ # Update arguments
+ training_args_dict = training_args.to_dict()
+ training_args_dict.update(
+ dict(remove_unused_columns=False)
+ ) # important for pairwise dataset
+ training_args = Seq2SeqTrainingArguments(**training_args_dict)
+
+ # Initialize our Trainer
+ trainer = PairwiseTrainer(
+ model=model,
+ args=training_args,
+ tokenizer=tokenizer,
+ data_collator=data_collator,
+ callbacks=callbacks + [SavePeftModelCallback()],
+ compute_metrics=compute_accuracy,
+ **split_dataset(dataset, data_args, training_args),
+ )
+
+ # Training
+ if training_args.do_train:
+ train_result = trainer.train(
+ resume_from_checkpoint=training_args.resume_from_checkpoint
+ )
+ trainer.save_model()
+ trainer.log_metrics("train", train_result.metrics)
+ trainer.save_metrics("train", train_result.metrics)
+ trainer.save_state()
+ if trainer.is_world_process_zero() and model_args.plot_loss:
+ plot_loss(training_args.output_dir, keys=["loss", "eval_loss"])
+
+ # Evaluation
+ if training_args.do_eval:
+ metrics = trainer.evaluate(metric_key_prefix="eval")
+ trainer.log_metrics("eval", metrics)
+ trainer.save_metrics("eval", metrics)
+
+ # Predict
+ if training_args.do_predict:
+ predict_results = trainer.predict(dataset, metric_key_prefix="predict")
+ trainer.log_metrics("predict", predict_results.metrics)
+ trainer.save_metrics("predict", predict_results.metrics)
+ trainer.save_predictions(predict_results)
+
+
+def train(
+ args: Optional[Dict[str, Any]] = None,
+ callbacks: Optional[List["TrainerCallback"]] = None,
+):
+ (
+ model_args,
+ data_args,
+ training_args,
+ finetuning_args,
+ generating_args,
+ ) = get_train_args(args)
+ callbacks = [LogCallback()] if callbacks is None else callbacks
+
+ run_rm(
+ model_args,
+ data_args,
+ training_args,
+ finetuning_args,
+ callbacks,
+ )
+
+
+if __name__ == "__main__":
+ train()
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train.py
new file mode 100644
index 00000000..c6924e07
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train.py
@@ -0,0 +1,164 @@
+import os
+import sys
+
+ROOT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+sys.path.append(ROOT_PATH)
+from typing import TYPE_CHECKING, Any, Dict, List, Optional
+
+from dbgpt_hub_gql.configs.config import IGNORE_INDEX
+from dbgpt_hub_gql.data_process.data_utils import (
+ get_dataset,
+ preprocess_dataset,
+ split_dataset,
+)
+from dbgpt_hub_gql.llm_base.config_parser import get_train_args
+from dbgpt_hub_gql.llm_base.load_tokenizer import load_model_and_tokenizer
+from dbgpt_hub_gql.llm_base.loggings import LogCallback, get_logger
+from dbgpt_hub_gql.llm_base.model_trainer import (
+ ComputeMetrics,
+ Seq2SeqPeftTrainer,
+ get_logits_processor,
+ plot_loss,
+)
+from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainingArguments
+
+if TYPE_CHECKING:
+ from dbgpt_hub_gql.configs import (
+ DataArguments,
+ FinetuningArguments,
+ GeneratingArguments,
+ ModelArguments,
+ )
+ from transformers import TrainerCallback
+
+
+logger = get_logger(__name__)
+
+
+def run_sft(
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ callbacks: Optional[List["TrainerCallback"]] = None,
+):
+ dataset = get_dataset(model_args, data_args)
+ model, tokenizer = load_model_and_tokenizer(
+ model_args, finetuning_args, training_args.do_train
+ )
+ dataset = preprocess_dataset(dataset, tokenizer, data_args, training_args, "sft")
+ data_collator = DataCollatorForSeq2Seq(
+ tokenizer=tokenizer,
+ label_pad_token_id=IGNORE_INDEX
+ if data_args.ignore_pad_token_for_loss
+ else tokenizer.pad_token_id,
+ )
+
+ # Override the decoding parameters of Seq2SeqTrainer
+ training_args_dict = training_args.to_dict()
+ training_args_dict.update(
+ dict(
+ generation_max_length=training_args.generation_max_length
+ or data_args.max_target_length,
+ generation_num_beams=data_args.eval_num_beams
+ or training_args.generation_num_beams,
+ )
+ )
+ training_args = Seq2SeqTrainingArguments(**training_args_dict)
+
+ # Initialize our Trainer
+ trainer = Seq2SeqPeftTrainer(
+ finetuning_args=finetuning_args,
+ model=model,
+ args=training_args,
+ tokenizer=tokenizer,
+ data_collator=data_collator,
+ callbacks=callbacks,
+ compute_metrics=ComputeMetrics(tokenizer)
+ if training_args.predict_with_generate
+ else None,
+ **split_dataset(dataset, data_args, training_args)
+ )
+
+ # Keyword arguments for `model.generate`
+ gen_kwargs = generating_args.to_dict()
+ gen_kwargs["eos_token_id"] = list(
+ set([tokenizer.eos_token_id] + tokenizer.additional_special_tokens_ids)
+ )
+ gen_kwargs["pad_token_id"] = tokenizer.pad_token_id
+ gen_kwargs["logits_processor"] = get_logits_processor()
+
+ # Training
+ if training_args.do_train:
+ train_result = trainer.train(
+ resume_from_checkpoint=training_args.resume_from_checkpoint
+ )
+ trainer.log_metrics("train", train_result.metrics)
+ trainer.save_metrics("train", train_result.metrics)
+ trainer.save_state()
+ trainer.save_model()
+ if trainer.is_world_process_zero() and model_args.plot_loss:
+ plot_loss(training_args.output_dir, keys=["loss", "eval_loss"])
+
+ # Evaluation
+ if training_args.do_eval:
+ metrics = trainer.evaluate(metric_key_prefix="eval", **gen_kwargs)
+ if (
+ training_args.predict_with_generate
+ ): # eval_loss will be wrong if predict_with_generate is enabled
+ metrics.pop("eval_loss", None)
+ trainer.log_metrics("eval", metrics)
+ trainer.save_metrics("eval", metrics)
+
+ # Predict
+ if training_args.do_predict:
+ predict_results = trainer.predict(
+ dataset, metric_key_prefix="predict", **gen_kwargs
+ )
+ if (
+ training_args.predict_with_generate
+ ): # predict_loss will be wrong if predict_with_generate is enabled
+ predict_results.metrics.pop("predict_loss", None)
+ trainer.log_metrics("predict", predict_results.metrics)
+ trainer.save_metrics("predict", predict_results.metrics)
+ trainer.save_predictions(predict_results)
+
+
+def train(
+ args: Optional[Dict[str, Any]] = None,
+ callbacks: Optional[List["TrainerCallback"]] = None,
+):
+ (
+ model_args,
+ data_args,
+ training_args,
+ finetuning_args,
+ generating_args,
+ ) = get_train_args(args)
+ callbacks = [LogCallback()] if callbacks is None else callbacks
+
+ run_sft(
+ model_args,
+ data_args,
+ training_args,
+ finetuning_args,
+ generating_args,
+ callbacks,
+ )
+
+
+def export_model(
+ args: Optional[Dict[str, Any]] = None, max_shard_size: Optional[str] = "10GB"
+):
+ model_args, _, training_args, finetuning_args, _ = get_train_args(args)
+ model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args)
+ model.save_pretrained(training_args.output_dir, max_shard_size=max_shard_size)
+ try:
+ tokenizer.save_pretrained(training_args.output_dir)
+    except Exception:
+ logger.warning("Cannot save tokenizer, please copy the files manually.")
+
+
+if __name__ == "__main__":
+ train()
diff --git a/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train_api.py b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train_api.py
new file mode 100644
index 00000000..8e794665
--- /dev/null
+++ b/src/dbgpt-hub-gql/dbgpt_hub_gql/train/sft_train_api.py
@@ -0,0 +1,47 @@
+import os
+from typing import Any, Dict, Optional
+
+from dbgpt_hub_gql.train import sft_train
+
+
+def start_sft(
+ args: Optional[Dict[str, Any]] = None, cuda_visible_devices: Optional[str] = "0"
+):
+ # Setting CUDA Device
+ os.environ["CUDA_VISIBLE_DEVICES"] = cuda_visible_devices
+
+ # Default Arguments
+ if args is None:
+ args = {
+ "model_name_or_path": "codellama/CodeLlama-13b-Instruct-hf",
+ "do_train": True,
+ "dataset": "tugraph_db_train",
+ "max_source_length": 2048,
+ "max_target_length": 512,
+ "finetuning_type": "lora",
+ "lora_target": "q_proj,v_proj",
+ "template": "llama2",
+ "lora_rank": 64,
+ "lora_alpha": 32,
+ "output_dir": "dbgpt_hub_gql/output/adapter/CodeLlama-13b-gql-lora",
+ "overwrite_cache": True,
+ "overwrite_output_dir": True,
+ "per_device_train_batch_size": 1,
+ "gradient_accumulation_steps": 16,
+ "lr_scheduler_type": "cosine_with_restarts",
+ "logging_steps": 50,
+ "save_steps": 2000,
+ "learning_rate": 2e-4,
+ "num_train_epochs": 8,
+ "plot_loss": True,
+ "bf16": True,
+ }
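+    # Note: a caller-supplied `args` dict replaces these defaults wholesale
+    # (no merging), so it must contain every argument sft_train.train needs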
+
+ # Run SFT
+ sft_train.train(args)
+
+
+if __name__ == "__main__":
+ start_sft()
diff --git a/src/dbgpt-hub-gql/setup.py b/src/dbgpt-hub-gql/setup.py
new file mode 100644
index 00000000..4034de6e
--- /dev/null
+++ b/src/dbgpt-hub-gql/setup.py
@@ -0,0 +1,100 @@
+from typing import List
+
+from setuptools import find_packages, setup
+
+
+class SetupSpec:
+ def __init__(self) -> None:
+ self.extras: dict = {}
+ self.install_requires: List[str] = []
+
+ @property
+ def unique_extras(self) -> dict[str, list[str]]:
+ unique_extras = {}
+ for k, v in self.extras.items():
+ unique_extras[k] = list(set(v))
+ return unique_extras
+
+
+setup_spec = SetupSpec()
+
+
+def core_dependencies():
+ setup_spec.extras["core"] = [
+ "transformers>=4.41.2",
+ "datasets>=2.14.6",
+ "tiktoken>=0.7.0",
+ "torch>=2.0.0",
+ "peft>=0.4.0",
+ "trl>=0.5.0",
+ "prettytable",
+ "func-timeout",
+ "sqlparse",
+ "jsonlines",
+ "rouge-chinese>=1.0.3",
+ "jieba>=0.42.1",
+ "nltk>=3.8.1",
+ "matplotlib>=3.8.1",
+ "bitsandbytes==0.41.3.post2",
+ "accelerate",
+ "sentencepiece==0.1.99",
+ "einops==0.6.1",
+ "evaluate==0.4.0",
+ "scikit-learn==1.2.2",
+ "wandb==0.15.3",
+ "pydantic==1.10.11",
+ "gradio>=3.36.0",
+ "uvicorn-http2>=0.0.0",
+ "uvicorn>=0.24.0.post1",
+ "fastapi==0.95.1",
+ "transformers-stream-generator>=0.0.4",
+ "sse-starlette>=1.6.5",
+ "rapidfuzz>=3.5.2",
+ "scipy>=1.11.3",
+ "nltk>=3.8.1",
+ "pymysql>=1.1.0",
+ "pyyaml==6.0.1",
+ "black>=23.11.0",
+ "pyright>=1.1.335",
+ "pylint>=3.0.2",
+ "markupsafe==2.1.3",
+ "nvidia-cuda-nvrtc-cu12==12.1.105",
+ "nvidia-cuda-runtime-cu12==12.1.105",
+ "nvidia-cuda-cupti-cu12==12.1.105",
+ "nvidia-cudnn-cu12==8.9.2.26",
+ "nvidia-cublas-cu12==12.1.3.1",
+ "nvidia-cufft-cu12==11.0.2.54",
+ "nvidia-curand-cu12==10.3.2.106",
+ "nvidia-cusolver-cu12==11.4.5.107",
+ "nvidia-cusparse-cu12==12.1.0.106",
+ "nvidia-nccl-cu12==2.18.1",
+ "nvidia-nvtx-cu12==12.1.105",
+ "triton==2.1.0",
+ "nvidia-nvjitlink-cu12>=12.3.52",
+ "docopt>=0.6.2",
+ "openai>=1.6.1",
+ "jaro-winkler==2.0.3",
+ "antlr4-python3-runtime==4.13.2",
+ "JPype1==1.5.0",
+ ]
+
+
+def init_install_requires():
+ setup_spec.install_requires += setup_spec.extras["core"]
+ print(f"Install requires: \n{','.join(setup_spec.install_requires)}")
+
+
+core_dependencies()
+init_install_requires()
+
+excluded_packages = ["tests", "*.tests", "*.tests.*", "examples"]
+
+setup(
+ name="dbgpt-hub-gql",
+ version="0.3.1",
+ description="DB-GPT-Hub: Text-to-GQL parsing with LLMs",
+ packages=find_packages(exclude=excluded_packages),
+ install_requires=setup_spec.install_requires,
+ extras_require=setup_spec.unique_extras,
+ python_requires=">=3.10",
+)
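+
+# Installation example (illustrative): from the repository root,
+#   pip install -e src/dbgpt-hub-gql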