

"Zig-Zag & chasing salvos" Topic


11 Posts



Back to the Game Design Message Board

Back to the American Revolution Message Board

Back to the Naval Gaming 1898-1929 Message Board



504 hits since 26 Oct 2017
©1994-2017 Bill Armintrout

WarWizard27 Oct 2017 10:44 a.m. PST

Saw this interesting article today

link

UshCha Supporting Member of TMP27 Oct 2017 10:46 a.m. PST

It is common in industry, when trying to understand processes, to run test cases to check the response of the system and ensure it behaves as expected. We could do this for rules. Now, different periods would need different test cases.

As an example, one of our test cases for, say, WW2 would be three squads of troops in the open emerging from a wood at 400m vs one squad dug in in a fighting position. Run the model a few times to see how the results varied and whether they accorded with the design intent.

Now, the variation in results would be interesting. Not everybody would agree what the answer should be in the real world, and in the game world folk may not even see that as the ideal solution anyway. Some folk do not like suppression, and those who want a wide variation in the answer would not see it as ideal either. But it would show what the result is for the system being used.

At this juncture this is a thought experiment: would it be of interest to run, say, 10 simple scenarios to get a handle on the basic driving parameters?

Would it be of interest to run an identical simple scenario against several sets of rules to compare results? It would be a qualitative analysis of rules, which does seem to be missing from rule reviews.
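The kind of repeated-run test described above can be sketched in a few lines of Python. The dice mechanics below (hit numbers, turn limit) are invented purely for illustration, not taken from any published ruleset; the point is the harness itself: resolve the same scenario many times and summarise the spread of outcomes.

```python
import random
from collections import Counter

def resolve_scenario(rng):
    """One run of the test case: three squads attacking one dug-in squad.
    Hit numbers here are made up for illustration only."""
    attackers, defenders, turns = 3, 1, 0
    while attackers > 0 and defenders > 0 and turns < 20:
        turns += 1
        # Defender fires first: dug-in troops hit on 4+ on a d6.
        if rng.randint(1, 6) >= 4:
            attackers -= 1
        # Each surviving attacking squad hits the dug-in squad on a 6.
        if any(rng.randint(1, 6) == 6 for _ in range(attackers)):
            defenders -= 1
    if defenders == 0:
        return "attackers win"
    if attackers == 0:
        return "defenders win"
    return "stalemate"

def run_trials(n=1000, seed=1):
    """Run the scenario n times and tally the outcomes."""
    rng = random.Random(seed)
    return Counter(resolve_scenario(rng) for _ in range(n))

results = run_trials()
for outcome, count in results.most_common():
    print(f"{outcome}: {count / 1000:.1%}")
```

Running the same harness against a second ruleset's resolver would give a direct side-by-side comparison of the outcome distributions.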

GildasFacit Sponsoring Member of TMP27 Oct 2017 10:58 a.m. PST

I'd hardly see the need to run actual plays in simple cases like that; surely a decision tree with probabilities would give you a better answer.
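For a single exchange of fire, the exact probabilities can indeed be enumerated rather than simulated. A minimal sketch, again assuming invented hit numbers (defender hits on 4+, attacker on 6), which walks every branch of the two-roll tree:

```python
from fractions import Fraction
from itertools import product

# A toy decision tree: one exchange of fire resolved with two d6 rolls.
# Enumerating every die combination gives exact outcome probabilities
# with no need for repeated play-throughs. Thresholds are invented.
outcomes = {}
for d_def, d_atk in product(range(1, 7), repeat=2):
    key = (d_def >= 4, d_atk == 6)  # (defender hit?, attacker hit?)
    outcomes[key] = outcomes.get(key, Fraction(0)) + Fraction(1, 36)

p_def_hit = sum(p for (dh, _), p in outcomes.items() if dh)
p_atk_hit = sum(p for (_, ah), p in outcomes.items() if ah)
print(p_def_hit)  # 1/2
print(p_atk_hit)  # 1/6
```

Once activations and multiple turns enter the picture, though, the tree fans out quickly, which is where Monte Carlo runs become the practical option.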

Striker27 Oct 2017 12:15 p.m. PST

I'd be interested to see how different rulesets handle the scenario. I'm not sure how much difference there would be but it may bring out enough information to help someone decide on a particular set (ex. how suppression was handled or how "deadly" a ruleset is).

UshCha Supporting Member of TMP27 Oct 2017 12:31 p.m. PST

GildasFacit, it's not as simple as that. First, a decision tree would not tell you the possible outcomes easily unless you ran it through a Monte Carlo simulation, as it's actually not that simple; at least not in my rules, where it would involve a significant number of activations and dice rolls.
To be fair, I may have lied: it may need more than 10 runs of the simple scenario to see a typical range of results.

Furthermore, in reality there could be long debates about an acceptable test case. In my rules I would (obviously) demand that the simulation start at, say, 400m, which is well within light machine gun range. Some popular games do not have ranges at all (Crossfire being one that has none), and a number have non-linear ranges, so do you start at, say, 1½ times rifle range, which is not the same between linear and non-linear range systems?

Striker, that is part of the assessment you could get: also how many bounds it takes, how many tables need to be consulted, etc.

GildasFacit Sponsoring Member of TMP27 Oct 2017 1:16 p.m. PST

If it is too complex for a decision tree then 10 random tries have no hope of giving anything close to a performance mapping of outcomes.

To be honest I think something of this nature on small scale actions is likely to be too subjective for any analysis to yield useful results.

UshCha Supporting Member of TMP28 Oct 2017 1:47 a.m. PST

Not sure I would agree. A graphic example would be, say, a 4-tank platoon, buttoned up in echelon left, being attacked from the right by say 2 tanks, and then the same with the attack coming from the left. How many were eliminated in each case? A graphic look-see at the system's ability to model standard manual formations: if the model has the same result for both, you would have shown that the model, for good or ill, did not capture that aspect. To me that is more accessible than a statistical analysis.
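That look-see can itself be automated: run the mirrored attacks under the same toy model and compare average losses. Everything below (the hit numbers, the `facing_matters` flag) is invented for illustration; the test is simply whether the two attack directions produce the same answer.

```python
import random
from statistics import mean

def losses(attack_side, rng, facing_matters=True):
    """Toy resolution: 2 tanks attack a 4-tank platoon in echelon left.
    If the rules model facing, an attack from the right catches the
    exposed flank and hits more easily. Mechanics entirely invented."""
    hit_on = 4
    if facing_matters and attack_side == "right":
        hit_on = 3  # easier shots against the exposed flank
    return sum(1 for _ in range(2) if rng.randint(1, 6) >= hit_on)

def average_losses(attack_side, n=2000, seed=7):
    rng = random.Random(seed)
    return mean(losses(attack_side, rng) for _ in range(n))

left, right = average_losses("left"), average_losses("right")
print(f"attack from left:  {left:.2f} tanks lost per engagement")
print(f"attack from right: {right:.2f} tanks lost per engagement")
# If the two averages match across many runs, the rules ignore facing.
```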

nevinsrip Sponsoring Member of TMP28 Oct 2017 10:59 p.m. PST

When I saw the words Zig-Zag I got a whole different mental picture. Or flashback?

FlyXwire29 Oct 2017 4:54 a.m. PST

….and I thought this was about a "Rare Revolutionary War Sword" (it is when read from the AWI board).

USAFpilot29 Oct 2017 12:20 p.m. PST

Interesting idea. Running test cases is essentially collecting empirical data and comparing it to what you think the theoretical result should be. However, like flipping a coin to prove you have a 50% chance of heads or tails, you have to run the test many times: the more flips of the coin, the more the deviation from 50% tightens up.
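That coin-flip convergence can be demonstrated directly. A minimal sketch (experiment sizes and the number of repeats are chosen arbitrarily):

```python
import random

def deviation_from_half(flips, rng):
    """Flip a fair coin `flips` times; return |heads fraction - 0.5|."""
    heads = sum(rng.random() < 0.5 for _ in range(flips))
    return abs(heads / flips - 0.5)

rng = random.Random(42)
avg_dev = {}
for n in (10, 100, 1000, 10000):
    # Average the deviation over 100 repeated experiments of size n.
    avg_dev[n] = sum(deviation_from_half(n, rng) for _ in range(100)) / 100
    print(f"{n:>5} flips: average deviation from 50% = {avg_dev[n]:.4f}")
```

The printed deviations shrink roughly in proportion to one over the square root of the number of flips, which is why a handful of play-throughs of a scenario can only give a rough picture of the outcome spread.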

These types of tests should be run by the designers during game development.

UshCha Supporting Member of TMP29 Oct 2017 12:47 p.m. PST

USAFpilot, we did do this, but at the time we did not keep the data. The idea here was to give some sort of data to prospective buyers. Designers doing it for this reason could lead folk to suspect the objectivity of the test; Which? magazine does its own tests on white goods to ensure they are not biased.
