java.lang.IllegalArgumentException: boolean(67) should between 0 and 1 inclusive of both values #1143
Hi @fabiencelier, sorry for the inconvenience at your end. It looks like a deserialization error. Would you mind sharing the table structure? You may also try the nightly build to see if the issue has been addressed or not.
The query looks like this:

SELECT
    T1.field0 AS `field_0`,
    T1.field1 AS `field_1`,
    ...
    T21.field_n AS `field_146`, -- we have 146 fields
    SUM(Tk.field_1) AS `SUM_1`,
    ...
    SUM(Tk.field_5) AS `SUM_5`, -- we have 5 sums
    COUNT(*) AS `count`
FROM
    `my_database`.`table-1` AS T1
    LEFT OUTER JOIN `my_database`.`table_2` AS T2 ON (T1.key_field = T2.key_field)
    ... -- we have 21 joined tables
GROUP BY
    T1.field0,
    T1.field1,
    ... -- all the fields in the select

This is a big query, but so far I have not been able to reproduce the issue on smaller queries. I have tried to split it into 2 queries. I don't think I can easily share the structure of the 20 tables / 140 fields.
To share the table structure, you can use the query below and share the structure of the temporary table:

drop table if exists temp_table;
create table temp_table engine=Memory as
select * from ( <your query> ) where 0;

Apart from that, could you use the nightly build and see if your query works or not? Apart from a deserialization issue on the client side (e.g. a JDBC driver issue?), it may also relate to the server (e.g. the server killed the query after exhausting a resource limit, like #976).
I'm finding a similar deserialization issue that may be related to this. I did not find more info about it, so sorry if this has been discussed before. The issue arises when dealing with Nested and Tuple columns. First, I create a table:

set flatten_nested = 0; -- This is needed for my use case
create table issue (
a_tuple Tuple(
field_a Nullable(String),
field_b Nullable(String)
),
a_string String,
a_nested Nested(
field_a Nullable(String),
field_b Nullable(String)
)
) engine = MergeTree() order by tuple();

Then, I proceed to insert a single row:

insert into issue
values
(
('a_value', 'another_value'),
'another_one',
[
('a_value', 'another_value')
]
);

Now, if I proceed to issue this statement:

The query returns appropriately. However, if I change the order of the elements:

Then the following exception is thrown:

It happens when we have a Nested field followed by a Tuple in the query.
Hi @fabiencelier, it seems max_execution_time is too small for huge query results. The buffered string in ClickHouseInputStream is 'Code: 159, e.displayText() = DB::Exception: Timeout exceeded: ...', so boolean(67) represents the ASCII code of the 'C' in 'Code: 159'. @zhicwu, maybe we should check that the query result is successful before deserializing. We can reproduce this by setting max_execution_time to a small value, e.g. 1.
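To see why the message reports 67: a binary reader expects a single byte equal to 0 or 1 for a boolean, but here it consumed the first character of the server's error text instead. A minimal sketch of the mismatch (the `readBoolean` helper below is hypothetical, not the driver's actual code; only the error message is taken from this issue):

```java
public class BooleanMisreadDemo {
    // Hypothetical helper mirroring a binary reader that decodes a boolean:
    // it takes one byte and expects it to be 0 or 1.
    static boolean readBoolean(int b) {
        if (b < 0 || b > 1) {
            throw new IllegalArgumentException(
                "boolean(" + b + ") should between 0 and 1 inclusive of both values");
        }
        return b == 1;
    }

    public static void main(String[] args) {
        // The server replied with error text instead of binary row data,
        // so the first byte handed to the reader is 'C' (ASCII 67).
        String serverError = "Code: 159, e.displayText() = DB::Exception: Timeout exceeded";
        try {
            readBoolean(serverError.charAt(0));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // boolean(67) should between 0 and 1 inclusive of both values
        }
    }
}
```

This is why the reported byte value varies with the wording of the server error: whichever character happens to land on the boolean column is the one reported.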
It might have been related to the query failing because of a timeout; we solved the issue by giving more resources to our ClickHouse server.
Unfortunately, the ClickHouse server does not support HTTP trailers, meaning there's only one response code available to the client. The error handling on the server side should be improved too, as sometimes it just kills the connection without any error in the response. Understood that the deserialization error is confusing, but in most cases it's a sign of a server exception, and in general we should look into system.query_log or the server log to investigate the root cause. Let me add more information, like query_id, session_id and the server display name, into the error message so that it's easier to troubleshoot.
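One possible client-side mitigation, sketched below under the assumption that textual server errors in the response body start with "Code: <number>, ..." (as in the timeout message quoted above): peek at the head of the payload and fail fast instead of handing error text to the binary deserializer. This is an illustration, not the driver's actual implementation.

```java
import java.nio.charset.StandardCharsets;

public class ServerErrorPeek {
    // Sketch: check whether the head of a response payload looks like a
    // textual ClickHouse error ("Code: <number>, ...") rather than binary
    // row data, so a clear error can be raised instead of boolean(67).
    static boolean looksLikeServerError(byte[] head) {
        byte[] prefix = "Code: ".getBytes(StandardCharsets.US_ASCII);
        if (head.length < prefix.length) {
            return false;
        }
        for (int i = 0; i < prefix.length; i++) {
            if (head[i] != prefix[i]) {
                return false;
            }
        }
        return true;
    }
}
```

A real implementation would need to buffer and push back the peeked bytes, and accept that legitimate binary data could in principle also begin with these bytes, so this check can only be a heuristic.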
This issue has been automatically marked as stale because it has not had activity in the last year. It will be closed in 30 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!
Hello,
We have an issue similar to #902 with the latest version of the ClickHouse client (0.3.2-patch11).

This happens when reading a large query result; unfortunately we can't reproduce it with a small dataset (when splitting the query in 2 parts, both parts can be read successfully).

By debugging, I found that it can happen on different fields, not one in particular. For instance, one of the fields which causes this issue is of type Nullable(String) and contains only 5 distinct values, all non-null.

Is this a known issue? Is there anything we can do to avoid this bug, or any information we can give you to help fix it?