I'm wondering how you would objectively measure the complexity of a set of miniatures rules.
You have to start with "What is a set of miniature wargame rules?" It's a piece of software designed to implement a simulation.
People might balk at those two nouns. By software, I just mean a set of instructions intended to be executed in a formal, procedural way. Whether or not you play a set of rules "straight" has nothing to do with the definition of rules: by written modification, table agreement, on-the-fly changes, or several other means, you are simply creating a different set of rules.
This gives us a basis for complexity metrics. Here are my top eleven:
1. Rule Length / Word Count – Total number of words/pages in the rulebook.
* A coarse proxy for density and scope.
* Count metrics like this are often used in studies of game accessibility.
* It's objective, but it is as useful a judge of content as the word count requirement in school essays, or how many bytes a program occupies.
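The coarseness of the metric is easy to see in code: a word count is a one-liner over extracted rulebook text. A minimal sketch, using a hypothetical two-rule excerpt:

```python
# Hypothetical rulebook excerpt (not from any real game).
rulebook_text = """
Each model may move up to 6 inches per turn.
Roll one d6 per attacking model; a 4+ scores a hit.
"""

def word_count(text: str) -> int:
    """Coarse length metric: total whitespace-separated tokens."""
    return len(text.split())

print(word_count(rulebook_text))  # 21
```

The metric is perfectly objective and perfectly indifferent to whether those 21 words are clear, consistent, or playable.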
2. Rule Count – Total number of unique rules or clauses.
* Simple baseline; more rules usually means more complexity.
* Doesn't account for rule interactions or difficulty of comprehension. A rose isn't a rose isn't a rose.
* In computer science, the analogous idea is called Source Lines of Code (SLOC).
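If clauses are numbered the way most rulebooks number them, a rule count can be automated with a regular expression. A sketch, assuming a hypothetical "1.2.3"-style clause-numbering convention:

```python
import re

# Hypothetical rulebook excerpt; each clause begins with a section number.
rules_text = """
1.1 Movement: each unit moves up to its Move stat.
1.2 Terrain: woods halve movement.
2.1 Shooting: roll one d6 per model in range.
2.1.1 Cover: targets in cover gain +1 to saves.
"""

def rule_count(text: str) -> int:
    """Count clauses by their leading section numbers (the SLOC analogue)."""
    return len(re.findall(r"^\d+(?:\.\d+)+\s", text, flags=re.MULTILINE))

print(rule_count(rules_text))  # 4
```

Like SLOC, this counts clauses without weighing them: the one-line cover rule and a page-long melee procedure each score 1.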
3. Decision Points per Turn (DPT) – Number of distinct choices a player must make each turn.
* Highlights player cognitive load.
* Similar to "branching factor" in chess AI analysis.
4. Interaction Density – Average number of rule interactions triggered by a single rule or action.
* Captures complexity during play.
* Similar to cyclomatic complexity in software testing.
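Treating rules as nodes and "rule A can trigger rule B" as edges makes both measures computable. A sketch over an invented five-rule interaction graph, using the standard cyclomatic formula E − N + 2 for a single connected component:

```python
# Hypothetical interaction graph: rule -> rules it can trigger during play.
interactions = {
    "shooting": ["cover", "morale"],
    "cover": [],
    "morale": ["fallback"],
    "fallback": ["terrain"],
    "terrain": [],
}

nodes = len(interactions)
edges = sum(len(targets) for targets in interactions.values())

interaction_density = edges / nodes  # average triggers per rule
cyclomatic = edges - nodes + 2       # E - N + 2, one connected component

print(interaction_density, cyclomatic)  # 0.8 1
```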
5. Lookup Frequency – How often players must reference the rulebook or charts.
* Can be empirically measured during playtests.
* Audience dependent: since we have no way to know the population of "all wargamers," we have no way to establish a good random sample.
6. Game State Space – Theoretical number of unique configurations (units, terrain, objectives).
* Higher state space often correlates with planning complexity and replayability.
* Similar to complexity in game theory and AI (e.g., Go vs. Tic-Tac-Toe).
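Exact state spaces are intractable, but a back-of-envelope upper bound is easy. A sketch with invented numbers (a 48×48 grid, 3 unit statuses, 20 units), ignoring collisions and reporting the base-10 logarithm since the raw count is astronomical:

```python
import math

# Hypothetical scenario: 20 units, each in any of 48*48 cells and 3 statuses.
cells = 48 * 48
statuses = 3
units = 20

# Upper bound ignoring collisions: (cells * statuses) ** units.
# log10 keeps the number readable.
log10_states = units * math.log10(cells * statuses)
print(round(log10_states, 1))  # 76.8
```

For comparison, Tic-Tac-Toe has fewer than 10^4 reachable states and Go roughly 10^170, so even a modest skirmish game sits closer to the Go end of the scale.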
7. Cognitive Load – Working memory burden required to execute valid moves.
* Can be estimated using task analysis (e.g., number of concepts a player must juggle).
* Sweller's Cognitive Load Theory, etc. Really needs a controlled environment.
8. Rules Entanglement – Degree to which understanding one rule depends on others.
* Highlights conceptual or procedural bottlenecks.
* Dependency graph analysis (rules-as-nodes, dependencies-as-edges).
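The dependency-graph idea can be made concrete with a small traversal: a rule's entanglement is the number of prerequisite rules reachable from it. The graph below is an invented example:

```python
# Hypothetical dependency graph: rule -> rules you must understand first.
deps = {
    "armor_save": ["to_wound"],
    "to_wound": ["to_hit"],
    "to_hit": ["dice_basics"],
    "dice_basics": [],
    "morale": ["dice_basics"],
}

def entanglement(rule: str) -> int:
    """Count all prerequisite rules reachable from `rule` (transitive)."""
    seen = set()
    stack = list(deps[rule])
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(deps[r])
    return len(seen)

print(entanglement("armor_save"))  # 3
print(entanglement("morale"))     # 1
```

A reader cannot apply the armor save without first understanding three other rules; morale only leans on one.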
9. Ambiguity Quotient – Proportion of rules open to multiple interpretations without clarifying text.
* Derived from expert review or natural-language-processing (NLP) ambiguity scoring.
* Ambiguity is audience dependent, even with NLP tooling: the score depends on the subjective decisions of the algorithm's developers and trainers.
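Even a crude stand-in for NLP scoring illustrates the metric and its subjectivity at once: the keyword list below is my invention, and a different reviewer would pick different words. A sketch flagging rules that contain vague qualifiers:

```python
# Hypothetical vague-word screen; the word list itself is a subjective choice.
VAGUE = {"should", "usually", "reasonable", "about", "nearby", "roughly"}

rules = [
    "A unit may move up to 6 inches.",
    "Models should usually stay within a reasonable distance of their leader.",
    "Roll a d6; on 4+ the shot hits.",
    "Units nearby a fleeing friend test morale.",
]

flagged = [r for r in rules if VAGUE & set(r.lower().rstrip(".").split())]
ambiguity_quotient = len(flagged) / len(rules)
print(ambiguity_quotient)  # 0.5
```

Two of the four invented rules are flagged, so the quotient is 0.5. Swap the word list and the "objective" score changes, which is exactly the audience-dependence problem.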
10. Granularity of Simulation – Level of abstraction (e.g., individual soldier vs. brigade).
* Lower granularity often leads to higher complexity per rule, even with fewer rules.
* Modeling & simulation literature (Davis & Hillestad, 1993).
11. Resolution Method Complexity – The mechanical complexity of resolving actions (e.g., d6 vs. differential equations).
* "To hit" roll + "to wound" + "armor save" → 3-step complexity.
* Minimum expression complexity (the simplest way to mathematically express a process) is objective, but rarely used. It also misses the simplicity gained when a rule is "intuitive" to a specific audience.
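The hit/wound/save chain above makes a tidy worked example: the step count is one complexity measure, and collapsing the chain into a single probability shows what a minimum expression of the same process looks like. The specific target numbers are invented:

```python
from fractions import Fraction

# Hypothetical resolution chain: hit on 4+, wound on 4+, save made on 5+.
steps = [
    ("to hit", Fraction(3, 6)),     # 4+ on a d6
    ("to wound", Fraction(3, 6)),   # 4+ on a d6
    ("fail save", Fraction(4, 6)),  # save only succeeds on 5+
]

# Step count: the procedural complexity the player experiences.
step_count = len(steps)

# Minimum expression: the whole chain is one multiplication.
p_casualty = Fraction(1)
for _, prob in steps:
    p_casualty *= prob

print(step_count, p_casualty)  # 3 1/6
```

Three dice rolls at the table; one number (a 1-in-6 chance per attack) on paper. The gap between the two is exactly the trade-off this metric tries to capture: the three-step version is slower but "intuitive" to dice gamers, while the single probability is minimal but opaque.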