Execute Parallel non-Action nodes + fix for ghost node as result of parallel nodes being incorrectly imported #207
Conversation
I've also decided to work on the implementation of non-action nodes triggering when they come from a parallel setup. As I mentioned here, I noticed that when a trigger was connected to multiple non-action nodes, the automation would only write a log entry in HA, since the outputted YAML looked like this:

```yaml
parallel:
  - service: system_log.write
    data: { message: "Node: condition_X" }
  - service: system_log.write
    data: { message: "Node: condition_Y" }
```

I'm now testing a change that outputs the following YAML instead:

```yaml
parallel:
  - alias: parallel_branch:condition_X
    if:
      - condition: state
        entity_id: binary_sensor.door
        state: "on"
    then:
      - service: light.turn_on
        alias: Door Light
  - alias: parallel_branch:condition_Y
    if:
      - condition: state
        entity_id: binary_sensor.window
        state: "on"
    then:
      - service: switch.turn_on
        alias: Window Fan
```
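As a minimal TypeScript sketch of how such a branch could be assembled from a condition node and its downstream actions (the `ConditionNode` shape and both function names here are hypothetical illustrations, not the project's actual API):

```ts
// Hypothetical node shape, for illustration only.
interface ConditionNode {
  id: string;
  // e.g. { condition: "state", entity_id: "binary_sensor.door", state: "on" }
  condition: Record<string, unknown>;
  // Actions downstream of the condition node.
  thenActions: Record<string, unknown>[];
}

// Emit a real if/then branch instead of a system_log.write placeholder,
// so the branch actually executes when the trigger fires.
function buildParallelBranch(node: ConditionNode): Record<string, unknown> {
  return {
    alias: `parallel_branch:${node.id}`,
    if: [node.condition],
    then: node.thenActions,
  };
}

// One parallel block holding one branch per non-action node.
function buildParallelBlock(nodes: ConditionNode[]): Record<string, unknown> {
  return { parallel: nodes.map(buildParallelBranch) };
}
```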
…g non-action nodes.
I can also confirm this change works with the state-machine path, which is where the original problem originated; it never failed on the native path. Generated YAML:

```yaml
alias: Test parallel condition
description: ""
triggers:
  - trigger: time_pattern
    seconds: /5
  - trigger: time_pattern
    seconds: /5
conditions: []
actions:
  - variables:
      current_node: >-
        {% if trigger.idx == "0" %}__parallel_trigger_0{% else
        %}action_1776339154188_3{% endif %}
      flow_context: {}
  - alias: State Machine Loop
    repeat:
      until:
        - condition: template
          value_template: "{{ current_node == \"END\" }}"
      sequence:
        - choose:
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"__parallel_trigger_0\" }}"
              sequence:
                - parallel:
                    - alias: parallel_branch:condition_1776337863184_1
                      if:
                        - condition: sun
                          after: sunrise
                      then:
                        - data:
                            message: Sun is up
                            level: info
                          action: system_log.write
                    - alias: parallel_branch:condition_1776337913287_3
                      if:
                        - condition: sun
                          after: sunset
                      then:
                        - data:
                            message: Sun is down
                            level: info
                          action: system_log.write
                - variables:
                    current_node: END
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"condition_1776337863184_1\" }}"
              sequence:
                - variables:
                    current_node: >-
                      {% if is_state('sun.sun', 'above_horizon')
                      %}action_1776337883076_2{% else %}END{% endif %}
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"action_1776337883076_2\" }}"
              sequence:
                - data:
                    message: Sun is up
                    level: info
                  action: system_log.write
                - variables:
                    current_node: END
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"condition_1776337913287_3\" }}"
              sequence:
                - variables:
                    current_node: >-
                      {% if is_state('sun.sun', 'below_horizon')
                      %}action_1776337920344_4{% else %}END{% endif %}
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"action_1776337920344_4\" }}"
              sequence:
                - data:
                    message: Sun is down
                    level: info
                  action: system_log.write
                - variables:
                    current_node: END
            - conditions:
                - condition: template
                  value_template: "{{ current_node == \"action_1776339154188_3\" }}"
              sequence:
                - data:
                    message: Secondary trigger
                    level: info
                  action: system_log.write
                - variables:
                    current_node: END
          default:
            - data:
                message: "C.A.F.E.: Unknown state \"{{ current_node }}\", ending flow"
                level: warning
              action: system_log.write
            - variables:
                current_node: END
mode: single
variables:
  _cafe_metadata:
    version: 1
    strategy: native
    nodes:
      trigger_1776337839400_0:
        x: -285
        "y": 30
      condition_1776337863184_1:
        x: 150
        "y": -75
      action_1776337883076_2:
        x: 480
        "y": -15
      condition_1776337913287_3:
        x: 135
        "y": 135
      action_1776337920344_4:
        x: 450
        "y": 150
      trigger_1776339142459_2:
        x: -210
        "y": 360
      action_1776339154188_3:
        x: 240
        "y": 375
    graph_id: a58ae811-c30a-4c4f-bf32-00aaedaf36d8
    graph_version: 1
```
with these logs:
Further testing revealed another issue; a fix is incoming. When a trigger fans out to multiple nodes (parallel branches), the transpiler was emitting every node in those branches twice: once inlined inside the `parallel:` block, and again as standalone state-machine `choose` entries that could never be reached. The fix changes the transpiler to collect all node IDs consumed by parallel branches and exclude them from the standalone `choose` entries; the parser was updated to reconstruct nodes directly from the inline branch content instead of relying on those (now-removed) duplicates. A sketch of the collection pass follows below.
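To illustrate the dedup pass described above, here is a minimal TypeScript sketch; the `FlowGraph` shape and `collectParallelConsumedIds` name are assumptions for illustration, not the actual transpiler code:

```ts
interface FlowGraph {
  // Directed edges between node IDs.
  edges: { source: string; target: string }[];
  // IDs of trigger nodes.
  triggerIds: string[];
}

// Collect every node ID reachable through a fan-out trigger's parallel
// branches, so those nodes can be excluded from the standalone
// state-machine choose entries (avoiding unreachable duplicates).
function collectParallelConsumedIds(graph: FlowGraph): Set<string> {
  const consumed = new Set<string>();
  for (const triggerId of graph.triggerIds) {
    const targets = graph.edges
      .filter((e) => e.source === triggerId)
      .map((e) => e.target);
    if (targets.length < 2) continue; // no fan-out, no parallel block emitted
    const stack = [...targets];
    while (stack.length > 0) {
      const id = stack.pop()!;
      if (consumed.has(id)) continue;
      consumed.add(id);
      for (const e of graph.edges) {
        if (e.source === id) stack.push(e.target);
      }
    }
  }
  return consumed;
}

// Usage: only nodes not consumed by a parallel branch get a standalone
// choose entry.
// const consumed = collectParallelConsumedIds(graph);
// const standalone = nodes.filter((n) => !consumed.has(n.id));
```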
…er importing/loading an automation. Fixed a structuring issue where parallel nodes were created in duplicate.
@FezVrasta can you please take a look?



Fix phantom node on parallel trigger reimport
When a trigger fans out to multiple targets, the transpiler generates a synthetic `__parallel_trigger_N` choose block. The parser didn't recognize this construct, creating a phantom "Action" node at (0, 0) with empty data. The original targets became orphaned.

**Changes**
**Phantom node fix** (e422cf2)

- `parseStateMachineChooseBlock` now captures `parallel` arrays from synthetic choose blocks
- the `extractParallelTargetIds` method resolves actual target IDs from the parallel items (handles both `system_log.write` placeholders and real action calls)
- `__parallel_trigger_*` entries are removed from `nodeInfoMap` before node creation and expanded into direct trigger-to-target edges; see the sketch after this list

**Type-aware ID assignment** (ac2a202)

- `getNextNodeId` in `parseSimpleActions` now assigns metadata IDs by node type prefix, preventing ID misordering when parsing parallel branches
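As a rough illustration of the phantom-node fix: `extractParallelTargetIds` and `nodeInfoMap` are named in the changes above, but the shapes and bodies in this TypeScript sketch are assumptions, not the project's actual code:

```ts
interface NodeInfo {
  id: string;
  // `parallel` items captured from the synthetic choose block
  // (assumed shape).
  parallel?: Record<string, unknown>[];
}

// Illustrative only: recover "parallel_branch:<node_id>" aliases from the
// parallel items to find the real target node IDs.
function extractParallelTargetIds(items: Record<string, unknown>[]): string[] {
  return items
    .map((item) => String(item.alias ?? ""))
    .filter((alias) => alias.startsWith("parallel_branch:"))
    .map((alias) => alias.slice("parallel_branch:".length));
}

// Drop each synthetic __parallel_trigger_* entry before node creation and
// replace it with direct trigger-to-target edges.
function expandParallelTriggers(
  nodeInfoMap: Map<string, NodeInfo>,
  edges: { source: string; target: string }[],
  triggerId: string,
): void {
  for (const [id, info] of [...nodeInfoMap]) {
    if (!id.startsWith("__parallel_trigger_")) continue;
    const targetIds = extractParallelTargetIds(info.parallel ?? []);
    nodeInfoMap.delete(id); // no phantom node gets created
    for (const target of targetIds) {
      edges.push({ source: triggerId, target });
    }
  }
}
```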
**Tests**

Two round-trip tests were added to `parallel-trigger-branches.test.ts`. Both assert that no `__parallel_trigger_*` phantom nodes exist and verify correct edge connectivity after a full transpile-then-parse cycle; a sketch of the general shape follows below.

Should fix issue #198.
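As a hedged, vitest-style sketch of what such a round-trip assertion could look like; the fixture and transpiler/parser helpers below are hypothetical stand-ins, not the suite's real API:

```ts
import { describe, expect, it } from "vitest";

// Hypothetical stand-ins for the project's real graph fixture and
// transpiler/parser entry points.
declare function buildFanOutTriggerGraph(): {
  nodes: { id: string }[];
  edges: { source: string; target: string }[];
};
declare function transpile(graph: unknown): string; // graph -> HA YAML
declare function parseAutomation(yaml: string): {
  nodes: { id: string }[];
  edges: { source: string; target: string }[];
};

describe("parallel trigger branches", () => {
  it("round-trips without phantom __parallel_trigger_* nodes", () => {
    const original = buildFanOutTriggerGraph();
    const reparsed = parseAutomation(transpile(original));

    // No synthetic dispatch entry may survive as a node.
    for (const node of reparsed.nodes) {
      expect(node.id).not.toMatch(/^__parallel_trigger_/);
    }
    // The trigger still connects directly to its original targets.
    expect(reparsed.edges).toEqual(expect.arrayContaining(original.edges));
  });
});
```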