Attempts to optimize simulated behaviors have typically relied on heuristics: a static set of if-then-else rules is derived and applied to the problem at hand. This approach, while mimicking the previously discovered decisions of humans, does not allow for true, dynamic learning. In contrast, evolutionary programming can be used to optimize the behavior of simulated forces that learn tactical courses of action adaptively. Actions of Computer-Generated Forces are created on-the-fly by iterative evolution through the state space topography. Tactical plans, in the form of a temporally linked set of task frames, are evolved independently for each entity in the simulation. Prospective courses of action at each time step in the scenario are scored with respect to the assigned mission (expressed as a Valuated State Space and normalizing function). Evolutionary updates of the plans incorporate dynamic changes in the developing situation and the sensed environment. This method can operate at any specified level of intelligence.
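The core loop described above can be sketched as a minimal evolutionary program: a population of candidate plans (each a temporally ordered list of tasks) is repeatedly mutated, scored against a mission fitness function, and truncated to its best members. The action names, plan length, and the `score` function below are illustrative assumptions, not the paper's actual Valuated State Space; they only stand in for a normalized mission objective in [0, 1].

```python
import random

random.seed(0)

ACTIONS = ["advance", "hold", "flank", "withdraw"]  # hypothetical task-frame actions
PLAN_LEN = 6      # time steps in each tactical plan
POP_SIZE = 20
GENERATIONS = 40

def score(plan):
    """Hypothetical stand-in for the mission scoring function: reward
    advancing in early time steps and holding in later ones, normalized
    to [0, 1] as a Valuated State Space score would be."""
    s = 0.0
    for t, action in enumerate(plan):
        if action == "advance" and t < 3:
            s += 1.0
        if action == "hold" and t >= 3:
            s += 1.0
    return s / PLAN_LEN

def mutate(plan):
    """Perturb one time step of the plan; in a live simulation this is where
    sensed changes in the environment would re-enter the evolution."""
    child = list(plan)
    child[random.randrange(PLAN_LEN)] = random.choice(ACTIONS)
    return child

def evolve():
    # Random initial population of candidate courses of action.
    pop = [[random.choice(ACTIONS) for _ in range(PLAN_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=score, reverse=True)
        survivors = pop[:POP_SIZE // 2]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(pop, key=score)

best = evolve()
print(best, score(best))
```

In the full method each simulated entity would run such a loop independently and continuously, so that re-scoring against the evolving situation keeps the plan current; the population size, mutation operator, and generation budget would set the "level of intelligence" knob the abstract mentions.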