I think AI Studio exhibits some problematic behaviour that makes it less predictable, less efficient, and inconsistent with how normal rules work (and with how you’d expect a computer to behave).
I think the following rules summarise how a non-AI Studio rule behaves (sketched in code after the list):
- If a branch condition evaluates to `true`, it executes the action(s).
- If the branch condition evaluates to `false`, it moves on to the next branch for evaluation, assuming there is a next branch.
- If there are multiple conditions in a branch, it evaluates the conditions from top to bottom.
- For multiple conditions in a branch separated with `and`:
– it moves on to the next condition within the branch whenever the current condition evaluates to `true`, as the branch can still evaluate to both `true` and `false`.
– it stops evaluating and moves on to the next branch as soon as it finds a condition that evaluates to `false`. This makes sense, as an `and` statement cannot evaluate to `true` when any of the conditions within it evaluates to `false`.
- For multiple conditions in a branch separated with `or`:
– it moves on to the next condition within the branch whenever the current condition evaluates to `false`, as the branch can still evaluate to both `true` and `false`.
– it stops evaluating and moves on to the next branch as soon as it finds a condition that evaluates to `true`. This makes sense, as an `or` statement only requires one of its conditions to be `true` for the branch to evaluate to `true`.
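Translated into code, this is just ordinary short-circuit evaluation and first-match branching. Here is a minimal Python sketch of that expected behaviour; it illustrates the convention only, not AI Studio’s actual implementation, and the record fields (`status`, `priority`, `vip`, `order_total`) are made up for the example.

```python
# Sketch of conventional branch evaluation: branches are checked top to
# bottom, each condition yields a hard True/False, and the first branch
# whose condition holds runs its action. Later branches are never reached.

def evaluate_rule(branches, record):
    """branches: ordered list of (condition, action) pairs."""
    for condition, action in branches:
        if condition(record):        # hard True/False, evaluated once
            action(record)
            return                   # stop: remaining branches are skipped
    # no branch matched, so nothing happens

# Python's own `and` / `or` already short-circuit exactly as described in
# the list above, so multi-condition branches can be plain lambdas:
branches = [
    # `and`: stops at the first False condition
    (lambda r: r["status"] == "open" and r["priority"] == "high",
     lambda r: print("escalate")),
    # `or`: stops at the first True condition
    (lambda r: r["vip"] or r["order_total"] > 1000,
     lambda r: print("notify account manager")),
]

evaluate_rule(branches, {"status": "open", "priority": "low",
                         "vip": False, "order_total": 1500})
# Prints "notify account manager": branch 1 stopped at `priority` (False),
# branch 2 fell through `vip` (False) and matched on `order_total` (True).
```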
All of this is good and logical. But that’s not how conditions with AI (Studio) evaluations seem to work. They seem to operate with some kind of fuzzy logic, where it evaluates all conditions in all branches and eventually decides which branch is “most true”.
This is problematic for a couple of reasons:
- It’s expensive, as it requires needless computing power (and tokens)
- It’s inaccurate, as rules can’t be designed with gates: you can no longer assume that a branch further along in the rule is only reached when the engine has successfully concluded that earlier branches should not be applied (hence “Otherwise if”). A sketch of such a gated design follows below.
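To make the gating point concrete, here is a hedged sketch of the kind of design that breaks when every branch is scored instead of evaluated in order. `ai_matches(prompt, record)` is a made-up placeholder for an AI-evaluated condition, not a real AI Studio API; the point is only that the second branch’s prompt relies on the first branch having already filtered out refund requests.

```python
# Hypothetical gated rule. `ai_matches(prompt, record)` is a placeholder for
# an AI-evaluated condition; the name and signature are assumptions.

def route_ticket(record, ai_matches):
    # Branch 1 acts as a gate: refund requests are handled here and, under
    # conventional if / else-if semantics, never reach branch 2.
    if ai_matches("Is this ticket a refund request?", record):
        return "refunds-queue"
    # Branch 2 ("Otherwise if") is written assuming refunds were already
    # filtered out, so its prompt does not need to exclude them. If the
    # engine instead evaluates every branch and picks the one it judges
    # "most true", a refund ticket that also mentions billing can end up
    # in the billing queue, defeating the gate.
    elif ai_matches("Does this ticket mention billing?", record):
        return "billing-queue"
    return "general-queue"
```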
For consistency, accuracy, and efficiency I argue that AI evaluation of rules should follow regular conventions for `if`, `else if`, `and`, and `or` statements.
Another way of putting it would be that I think AI conditions should just evaluate to a hard `true` or `false`, and these results should be respected without requiring re-evaluation at the end.