Currently, there is no mechanism to verify that the content of the results__ tables aligns with the content of the tables converted to the scamper1 format. A basic sanity check could involve comparing the number of distinct destination addresses.
When running the upload-tables.sh script, it should be possible to extract the number of affected rows (i.e., the count of distinct destination addresses) from the logs. For example, the logs contain an entry like `Number of affected rows: 3218865`.
This number should match the result of the following query:
```sql
WITH
    groupUniqArray((round, probe_ttl, reply_src_addr)) AS traceroute,
    arrayMap(x -> x.2, traceroute) AS ttls,
    arrayMap(x -> (x.1, x.3), traceroute) AS val,
    CAST((ttls, val), 'Map(UInt8, Tuple(UInt8, IPv6))') AS map,
    arrayMin(ttls) AS first_ttl,
    arrayMax(ttls) AS last_ttl,
    arrayMap(i -> (toUInt8(i), toUInt8(i + 1), map[toUInt8(i)], map[toUInt8(i + 1)]), range(first_ttl, last_ttl)) AS links,
    arrayJoin(links) AS link
SELECT COUNT(DISTINCT probe_dst_addr)
FROM (
    SELECT
        probe_protocol,
        probe_src_addr,
        probe_dst_prefix,
        probe_dst_addr,
        probe_src_port,
        probe_dst_port,
        link.1 AS near_ttl,
        link.2 AS far_ttl,
        link.3.2 AS near_addr,
        link.4.2 AS far_addr
    FROM **results__ table**
    GROUP BY
        probe_protocol,
        probe_src_addr,
        probe_dst_prefix,
        probe_dst_addr,
        probe_src_port,
        probe_dst_port
)
WHERE toString(near_addr) != '::' AND toString(far_addr) != '::'
```
Example of query result: {"uniqExact(probe_dst_addr)":3218865}
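This comparison could be scripted. Below is a minimal Python sketch (not part of the repository): the log file path `upload-tables.log` and the table name `results__example` are hypothetical placeholders to adapt, and it assumes a local `clickhouse-client` can reach the database with default credentials.

```python
import re
import subprocess

LOG_FILE = "upload-tables.log"       # hypothetical: output captured from upload-tables.sh
RESULTS_TABLE = "results__example"   # hypothetical: the results__ table to check


def affected_rows_from_log(path: str) -> int:
    """Extract the count reported by upload-tables.sh
    (log format from this issue: 'Number of affected rows: 3218865')."""
    with open(path) as f:
        for line in f:
            match = re.search(r"Number of affected rows:\s*(\d+)", line)
            if match:
                return int(match.group(1))
    raise ValueError("no 'Number of affected rows' entry found in the log")


# The links query above, with the table name substituted.
QUERY = f"""
WITH
    groupUniqArray((round, probe_ttl, reply_src_addr)) AS traceroute,
    arrayMap(x -> x.2, traceroute) AS ttls,
    arrayMap(x -> (x.1, x.3), traceroute) AS val,
    CAST((ttls, val), 'Map(UInt8, Tuple(UInt8, IPv6))') AS map,
    arrayMin(ttls) AS first_ttl,
    arrayMax(ttls) AS last_ttl,
    arrayMap(i -> (toUInt8(i), toUInt8(i + 1), map[toUInt8(i)], map[toUInt8(i + 1)]),
             range(first_ttl, last_ttl)) AS links,
    arrayJoin(links) AS link
SELECT COUNT(DISTINCT probe_dst_addr)
FROM (
    SELECT
        probe_protocol, probe_src_addr, probe_dst_prefix, probe_dst_addr,
        probe_src_port, probe_dst_port,
        link.1 AS near_ttl, link.2 AS far_ttl,
        link.3.2 AS near_addr, link.4.2 AS far_addr
    FROM {RESULTS_TABLE}
    GROUP BY probe_protocol, probe_src_addr, probe_dst_prefix, probe_dst_addr,
             probe_src_port, probe_dst_port
)
WHERE toString(near_addr) != '::' AND toString(far_addr) != '::'
"""


def distinct_destinations(query: str) -> int:
    """Run the query with clickhouse-client, which prints a single number."""
    result = subprocess.run(
        ["clickhouse-client", "--query", query],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())


if __name__ == "__main__":
    log_count = affected_rows_from_log(LOG_FILE)
    query_count = distinct_destinations(QUERY)
    status = "OK" if log_count == query_count else "MISMATCH"
    print(f"{status}: log={log_count}, query={query_count}")
```

If the flow ID cap discussed below applies to the table being checked, the extra `probe_src_port` predicate would also need to be added to `QUERY`.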
After applying the cap to flowid, the query needs to be slightly changed:
```sql
WITH
    groupUniqArray((round, probe_ttl, reply_src_addr)) AS traceroute,
    arrayMap(x -> x.2, traceroute) AS ttls,
    arrayMap(x -> (x.1, x.3), traceroute) AS val,
    CAST((ttls, val), 'Map(UInt8, Tuple(UInt8, IPv6))') AS map,
    arrayMin(ttls) AS first_ttl,
    arrayMax(ttls) AS last_ttl,
    arrayMap(i -> (toUInt8(i), toUInt8(i + 1), map[toUInt8(i)], map[toUInt8(i + 1)]), range(first_ttl, last_ttl)) AS links,
    arrayJoin(links) AS link
SELECT COUNT(DISTINCT probe_dst_addr)
FROM (
    SELECT
        probe_protocol,
        probe_src_addr,
        probe_dst_prefix,
        probe_dst_addr,
        probe_src_port,
        probe_dst_port,
        link.1 AS near_ttl,
        link.2 AS far_ttl,
        link.3.2 AS near_addr,
        link.4.2 AS far_addr
    FROM **results__ table**
    GROUP BY
        probe_protocol,
        probe_src_addr,
        probe_dst_prefix,
        probe_dst_addr,
        probe_src_port,
        probe_dst_port
)
WHERE
    (toString(near_addr) != '::' AND toString(far_addr) != '::') AND
    probe_src_port < 28096
```
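For reference, the 28096 bound presumably corresponds to keeping only probes whose flow ID falls below the cap, assuming the flow ID is encoded as an offset from a base source port: with a default base of 24000 and a cap of 4096 flow IDs, 24000 + 4096 = 28096. The sketch below illustrates that reading of the predicate; both constants are assumptions, not values confirmed in this issue.

```python
# Assumed constants: 24000 as the base probe source port and 4096 as the
# flow ID cap; together they would explain the probe_src_port < 28096 filter.
BASE_SRC_PORT = 24000
FLOW_ID_CAP = 4096


def within_flow_id_cap(probe_src_port: int) -> bool:
    """Mirror the SQL predicate `probe_src_port < 28096` under the assumptions above."""
    flow_id = probe_src_port - BASE_SRC_PORT
    return flow_id < FLOW_ID_CAP  # equivalent to probe_src_port < 24000 + 4096 = 28096


assert within_flow_id_cap(24000)      # flow 0: kept
assert not within_flow_id_cap(28096)  # flow 4096: filtered out, matching the query
```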