One issue for any such decision making is having a unified decision metric (best/good/bad) so the different analyses can finally be compared and a solution chosen. Risk vs. reward (of various flavors) must be reduced to just a few evaluation factors, and ideally to just one, to allow the decision.
Consider that decisions can be based on internal as well as external situational factors (i.e., if damaged, when does healing override attacking, or when do defensive tactics get chosen over offensive ones).
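A minimal sketch of what that unified metric can look like: each candidate action is collapsed to a single utility number from internal factors (own health) and external ones (nearby enemies), and the highest score wins. The factor names and weights here are hypothetical, not from any particular engine.

```python
# Utility-scoring sketch: reduce every candidate action to one number so
# unlike considerations (risk, reward, health) can be compared directly.
# All factor names and weights are made up for illustration.

def score_attack(state):
    # Reward scales with how healthy we are; risk with enemy numbers.
    return state["health"] * 0.8 - state["enemies_near"] * 0.2

def score_heal(state):
    # Healing matters more the lower our health (internal factor).
    return (1.0 - state["health"]) * 1.2

def score_defend(state):
    # Defending matters more when outnumbered (external factor).
    return state["enemies_near"] * 0.3

def choose_action(state):
    scores = {
        "attack": score_attack(state),
        "heal": score_heal(state),
        "defend": score_defend(state),
    }
    return max(scores, key=scores.get)

# Badly hurt and outnumbered: healing overrides attacking.
print(choose_action({"health": 0.2, "enemies_near": 3}))  # heal
# Healthy with no enemies nearby: attacking wins.
print(choose_action({"health": 1.0, "enemies_near": 0}))  # attack
```

The key point is the single comparable number: once everything is on one scale, "when does healing override attack" becomes an ordinary max over scores.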
If you have actions that aren't atomic (i.e., they take turns to build up and/or can be canceled partway through if the situation changes), then you have decisions about whether it's 'better' to carry through with an action (utilizing the expenditure of effort already invested), or to reevaluate and switch from the previously selected tactic to something currently more appropriate.
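One common way to handle that trade-off is a "commitment bonus": credit the effort already invested toward finishing, so the AI only abandons an in-progress action when the alternative is clearly better (this also prevents thrashing between tactics every tick). The weighting below is a hypothetical sketch, not a standard formula.

```python
# Sketch of finish-vs-switch for non-atomic actions: effort already spent
# raises the effective value of carrying through. Numbers are hypothetical.

def should_continue(current_score, progress, best_alternative_score,
                    commitment_weight=0.5):
    """progress is the 0..1 fraction of the action already completed."""
    effective = current_score + commitment_weight * progress
    return effective >= best_alternative_score

# Half-built action, alternative only slightly better: stay the course.
print(should_continue(1.0, 0.5, 1.2))   # True
# Alternative far better: abandon despite the sunk effort.
print(should_continue(1.0, 0.5, 2.0))   # False
```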
Other types of decisions: single targets versus opportunities to hit grouped multiples (requiring scanning the situation for those groupings), or the effect of 'cover', where targeting opportunities change rapidly (there may be only a short window to hit a target while it's in the open or behind low defensive cover, etc.). There's also "utility of position": finding good spots with open fields of fire, good defensive cover, and where the enemy has other disadvantages.
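"Utility of position" fits the same scoring pattern: each candidate spot gets one number combining field of fire, cover, and exposure, and the best spot wins. The attribute names and weights below are invented for illustration.

```python
# Position-utility sketch: reduce each candidate spot to one number.
# Attribute names and weights are hypothetical.

def position_utility(spot):
    return (spot["targets_visible"] * 1.0    # open field of fire
            + spot["cover"] * 2.0            # defensive cover, 0..1
            - spot["exposed_to"] * 1.5)      # enemies that can shoot back

def best_position(spots):
    return max(spots, key=position_utility)

spots = [
    {"name": "ridge",  "targets_visible": 4, "cover": 0.2, "exposed_to": 3},
    {"name": "bunker", "targets_visible": 2, "cover": 0.9, "exposed_to": 1},
]
# The ridge sees more targets but leaves us exposed; the bunker wins.
print(best_position(spots)["name"])  # bunker
```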
Situations have to be looked at many ways to spot opportunities (or the best of bad options, like digging in to minimize one's damage until the situation improves), and somehow all of those evaluations have to add up mathematically to produce good decisions.
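A nice property of the single-score approach is that the "best of bad options" case needs no special handling: when every option scores negatively, taking the max still yields the least-bad choice. Scores below are hypothetical.

```python
# 'Best of bad options' sketch: all scores are negative, yet max still
# picks the least-bad tactic. Scores are made up for illustration.

options = {
    "charge":  -5.0,   # heavy expected losses
    "retreat": -3.0,   # cedes ground
    "dig_in":  -1.5,   # take some damage but preserve the unit
}
least_bad = max(options, key=options.get)
print(least_bad)  # dig_in
```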
If multiple units on the same side are to be coordinated (and the other side is expected to coordinate as well), that explodes the complexity for the AI much further.
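The explosion is easy to quantify: if units choose independently and the AI scores joint plans, the joint action space grows exponentially with unit count (actions^units), before even considering opposing coordination.

```python
# Why coordination explodes: the joint action space for n independent
# units with the same action set grows as len(actions) ** n.
from itertools import product

actions = ["attack", "heal", "defend", "move"]

def joint_space_size(n_units):
    return len(actions) ** n_units

# Cross-check the formula by enumerating the 2-unit case explicitly.
assert joint_space_size(2) == len(list(product(actions, repeat=2)))

for units in (1, 2, 4, 8):
    print(units, "units ->", joint_space_size(units), "joint combinations")
```

Eight units with just four actions each already gives 65536 joint combinations, which is why coordinated AI usually falls back on heuristics or hierarchical (squad-level) decisions rather than scoring every combination.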