"Slowing Down the Game (1)" Topic


74 Posts


Personal logo etotheipi Sponsoring Member of TMP14 Dec 2022 9:47 a.m. PST

A lot of the 'hypothetical' as you define it is just as real as anything that has happened, past tense. It is not hypothetical that if you jump off a three-story building, which you've never done before, your trajectory and the results are known beforehand; they are not 'hypothetical' conclusions.

No, the trajectory is hypothetical.

Based on a number of different factors and conditions, you can calculate a number of different trajectories, each of which has a certain probability of being the actual one depending on how well we can predict the actual factors and conditions.

I would prefer a ten-story building, so that a larger number of factors and conditions could have larger effects on the actual trajectory of the jump, but a person jumping off a building is a great example of the difference between reality and a hypothetical.

You can calculate a hundred different trajectories and an actual event could match none of those.

When you talk about statistics, you probably mean parametric statistics, which are the tools for hypotheticals. Actual empirical data should be handled with non-parametric statistics. And they are fundamentally different.

In school, we often abuse the idea of statistics by letting people believe that parametric statistics (and the normal curve, in particular) are reality.

Your trajectory for me jumping off the building includes an instantaneous vector at the terminus: roughly, a speed and direction of motion when I hit the ground. IF you represent speed as a normally distributed variable, since the normal curve proceeds to infinity in both directions, there is a chance (hopefully small, but sometimes not) that the model would produce a negative speed. But the real-world situation doesn't.

There are a number of tools we use to handle those situations for hypothetical results. The most common one is we ignore results that don't make sense. When we develop the mechanisms to implement the model, we often constrain the implementation to not produce those results.
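
A minimal Python sketch of that negative-speed point; the mean and spread below are made-up numbers, not anyone's actual model:

import numpy as np

rng = np.random.default_rng(0)
mean_speed, sd = 24.0, 12.0                   # hypothetical impact speed (m/s) and spread
samples = rng.normal(mean_speed, sd, 100_000)
print("fraction of negative speeds:", np.mean(samples < 0))   # small but nonzero

# Two common fixes when implementing the model:
rejected = samples[samples >= 0]              # 1) ignore results that don't make sense
clamped = np.clip(samples, 0.0, None)         # 2) constrain the implementation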

This is pretty simple and obvious, and we don't expend a lot of brain energy dealing with it for a simple thing like negative speed. However, when we talk about more complex structures that are vectors that include many scalars, it becomes harder to see those problems. And when we talk about structures that represent the outcome of a history (a vector that is a series of states, each influenced by the last state), it becomes less easy to account for them.

That's why it is important to deliberately use the right tools for the right things. Part of that is knowing which things are which.

the events are provided a real environment, BASED on past realities.

So, again, no. Find a military analyst that gives you an absolute single answer to a problem instead of a range of likely outcomes assigned to probabilities and caveated by assumptions, and you will find a military analyst who has been fired for incompetence.

McLaddie14 Dec 2022 4:30 p.m. PST

That's why it is important to deliberately use the right tools for the right things. Part of that is knowing which things are which.

etotheipi:
I couldn't agree more. [Hypothetically speaking.]

the events are provided a real environment, BASED on past realities.

You seem to be defining 'reality' as only what has happened and everything forward is 'hypothetical' based on the falling off the roof scenario, but then you say this above, and I'm not sure.

That 'real environment' is based on reality. The poor schmuck who falls off the ten-story building in NY, isn't going to face plant in NJ 'hypothetically' or to put it in your terms 'in a real environment.'

I would think that any reality found in a wargame or simulation has its options, parameters and mechanics based on reality, whether ten minutes ago or two hundred years. In a very real sense, we don't live in the past at all but are continually existing in the future, dealing with 'hypotheticals' that haven't happened yet. To succeed, any actions have to be based on past reality.

So, getting out of the philosophical, to design a dynamic simulation, one that mimics reality, it has to have all its options/hypotheticals based on reality. That is what I *think* you are saying with that last statement.

There is the place reality/realism occupies in simulation design and play. Without that element, you aren't simulating history OR reality, not even hypothetically.

Find a military analyst that gives you an absolute single answer to a problem instead of a range of likely outcomes assigned to probabilities and caveated by assumptions, and you will find a military analyst who has been fired for incompetence.

He will also be fired if his 'range of likely outcomes etc. are not methodically determined. Done correctly, competently, that isn't 'hypothetical', as in hypothesis, but an established 'range' based on reality. As you have said, that requires focus and a methodical approach. "Part of that is knowing which things are which."

Which brings us back to wargames and determining the behavior of units in combat under the heading of 'morale.'

Either the parameters for that set of behaviors are based on the actual historical probabilities, i.e. reality, or it isn't a simulation or wargame recreating that aspect of history/reality. You can't have the poor schmuck landing in NJ falling off a building in NY and say that is 'realistic' or 'hypothetical' with any justification.

So, the question is how to create and establish that realism, not as a hypothesis, theory or opinion, but as the historic options and possibilities in our wargames that reflect reality. That is the 'how to' that I am interested in as a player and wargame designer.

And how that can be done without 'slowing down the game.' grin

Personal logo etotheipi Sponsoring Member of TMP14 Dec 2022 5:05 p.m. PST

So, as you say, you're not modeling reality, you're modeling a hypothetical outcome based on reality, but not reality.

The modeling tools for hypothetical things are different. If you used the same tools as the ones for empirical data, then you can end up with the guy faceplanting in NJ. The way you avoid that is you treat hypothetical models as hypothetical instead of as reality.

not as a hypothesis, theory or opinion, but as the historic options and possibilities in our wargames that reflect reality

Options are not reality and are based on theory and opinion. Hopefully they're informed by facts and reasoned judgement.

Personal logo Old Contemptible Supporting Member of TMP14 Dec 2022 8:59 p.m. PST

The trend for several years has been towards quick 1-to-2-hour skirmish games with individually based figures. When I started out in historical miniatures it was all about the big battalions, big battles that would go on for six to eight hours. I loved it. Now they are few and far between.

McLaddie15 Dec 2022 10:30 p.m. PST

So, as you say, you're not modeling reality, you're modeling a hypothetical outcome based on reality, but not reality.

etotheipi:

Thank you for the clarification. With that definition of 'hypothetical' regarding simulations, I can see why you would shy away from 'realism' and 'reality' in describing simulations and wargames.

If I decide that Napoleonic firefights should always resolve with one side retreating or routing, that is a hypothetical, and certainly not 'realistic' or 'reality' as you say. In that case, I would shy away from those descriptors too. As this is what most wargame designers do as a simulation design methodology, I'd say it is disingenuous to talk about realism in regard to their designs.

If, on the other hand, I find 100 examples of firefights in the historical record, and of those all but 2% were never resolved but continued until reinforcements arrived, one side charged or one side was attacked in the flank, and create mechanics to model that, statistically there is realism involved. IF I then find 20 more examples and test my results against them and the historic results stay true to those statistics…

then I can say that the rules are realistic and do simulate reality. A hypothetical becomes something else when the 'hypothesis' has been proven to model past reality. That's what you do with the hypothetical.
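
A toy sketch of that build-then-test idea in Python; every number below (the 2% resolution rate, the held-out counts) is just the hypothetical from the paragraphs above, not real data:

import random

random.seed(1)
P_RESOLVE = 2 / 100                        # calibrated from the hypothetical 100 firefights

def firefight_resolves_on_its_own(p=P_RESOLVE):
    # True = the firefight ends by itself, rather than by reinforcements,
    # a charge, or a flank attack
    return random.random() < p

trials = 10_000
sim_rate = sum(firefight_resolves_on_its_own() for _ in range(trials)) / trials
holdout_resolved = 1                       # hypothetical count among the 20 held-out examples
print(f"rule predicts {sim_rate:.1%} self-resolving; held-out sample shows {holdout_resolved / 20:.0%}")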

This is not true for predictive new, never-tried actions, designs, tactics etc., which is something different, though the process to develop any meaningful model does have its methodologies for proofs… ones that have been tested many times. Those types of models are created daily, with ever greater success over the decades.

Most simulation designers make that effort to establish a hypothesis based on a significant data base and then test the validity to say their system simulates reality. That's the way it works for most simulations and why 'realism' and 'reality' can be and usually is spoken of in regards to simulations. Perfect realism? No, not possible. But it isn't hypothetical either.

Personal logo etotheipi Sponsoring Member of TMP16 Dec 2022 2:11 p.m. PST

A hypothetical becomes something else when the 'hypothesis' has been proven to model past reality.

Not really. It's still a hypothetical. This is why we say "All models are wrong. Some models are useful." If it's useful to model a specific set of response surfaces, then that's all it is.

For any response surface, I can give you an infinite number of models that result in that output. Are they all reality?

Here's some attrition data:

[picture: attrition data]

And here are three green models that fit the data:

[picture: three green models fitted to the data]

Are they all reality? Is reality all these things? Is reality an infinite number of things? What about …

[picture: the red model]

… the red model? It fits the data points. When I was in combat in the military I would have been happy to be in the category of the people who periodically came back to life after being attrited.

Even though there is an obvious issue with the red model, it is still useful to model the referent. It doesn't matter how complex your desired referent and model are, the principle still applies. In fact, in a five-dimensional model (distance, relative azimuth (flank), morale, C2 quality, number of troops firing) it becomes harder to "see" the obvious problem. After six dimensions, I can't visualize it at all.
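
A quick Python illustration of that point, using four invented data points rather than the pictured ones; both "exact" fits below agree at every data point and disagree everywhere else:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])          # hypothetical attrition data points
y = np.array([100.0, 80.0, 65.0, 40.0])

coeffs = np.polyfit(x, y, deg=3)            # a cubic passes exactly through all four points

def model_a(t):
    return np.polyval(coeffs, t)

def model_b(t):
    # adding any term that is zero at the data points gives another exact fit
    return model_a(t) + 50 * np.sin(np.pi * t)

for xi, yi in zip(x, y):
    assert np.isclose(model_a(xi), yi) and np.isclose(model_b(xi), yi)

print(model_a(1.5), model_b(1.5))           # the two models disagree between the points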

McLaddie16 Dec 2022 9:49 p.m. PST

Are they all reality? Is reality all these things? Is reality an infinite number of things? What about …

etotheipi:

If you are only going to use 4 data points, then it is easy to have any number of systems hit all 4.

Simulation designers don't use just 4 data points for anything. When you have 40 or 100 or 200 data points, it becomes difficult enough to create one system which hits all of the points, let alone 4 different systems. No simulation designer would waste their time with just 4 data points or create 4 systems to do one thing.

And even when a designer gets a simulation to hit all the data points, he or she still has to test the system against reality to verify its ability to simulate the chosen aspects of reality.

Is reality an infinite number of things?

Yeah, it is, which is why it requires focus and methodology to simulate just a few aspects of it as you point out.

If a simulation is hypothetical and, as you say, "it is still useful to model the referent," then useful to model what? Certainly not the hypotheticals… a referent to the hypothesis it's designed to be? That is a given. You mean a useful referent in regards to reality. Either a hypothesis is a useful referent in regards to what it is simulating, or it doesn't simulate anything but someone's hypothesis.

Insisting that all simulations remain only hypotheticals flies in the face of the thousands of ways they are used every day, including computer games like Flight Simulator.

Either that or one is just ginning up hypotheticals without any interest in establishing whether they are true or not.

Personal logo etotheipi Sponsoring Member of TMP17 Dec 2022 7:35 a.m. PST

If you are only going to use 4 data points, then it is easy to have any number of systems hit all 4.

Simulation designers don't use just 4 data points for anything. When you have 40 or 100 or 200 data points, it becomes difficult enough to create one system which hits all of the points, let alone 4 different systems.

So, as I already said, it makes no difference how many points. If you have 20,000,000, that still doesn't overmatch an infinite number of models.

Anscombe's Quartet is the traditional explanation of where statistics break down.

[picture: Anscombe's Quartet]

link
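
For anyone who wants to check the quartet numerically, here is a short Python pass over the four datasets as published in Anscombe's 1973 paper (transcribed here, so worth verifying against the link above); all four come out with essentially the same means, variances, correlation, and regression line:

import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8]*7 + [19] + [8]*3, [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.array(x, float), np.array(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{name}: mean_y={y.mean():.2f} var_y={y.var(ddof=1):.2f} "
          f"r={np.corrcoef(x, y)[0, 1]:.3f} fit: y={slope:.2f}x+{intercept:.2f}")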

Simulated annealing gives a more modern take on it.

The solution offered for those problems is data visualization. Look at the red model and realize it does not represent the referent you are going for. However, as I pointed out, a five-dimensional model like the simple one above, which is not out of norms for wargaming, is difficult to visualize, let alone validate with a visual heuristic.

No simulation designer would waste their time with just 4 data points or create 4 systems to do one thing.

No, four systems is probably a very low limit for the number of different models used by the military in simulations as they build models that are useful for more and more purposes.

A simple list of twelve: link All twelve of these models have a few common, overlapping use cases, yet all of them are different models. C-SNAP-REV contains elements that C-SNAP does not. But they can both be used to model some of the same things. And the newer one is not appropriate for use for some things the older one is.

And even when a designer gets a simulation to hit all the data points, he or she still has to test the system against reality to verify its ability to simulate the chosen aspects of reality.

This is just bad logic. A set of data you design to, plus a set of data you test against, is simply a bigger set of data you design to. No matter how many data sets you use, it's still a single finite data set.

Is reality an infinite number of things?

Yeah, it is, which is why it requires focus and methodology to simulate just a few aspects of it as you point out.

You are taking that question out of context. The question was not about the scope of reality but about reality being an infinite number of different things. That's just bad rhetoric.

You mean a useful referent in regards to reality.

No, it's only valid to model the referent that was used to construct it. Nothing else.

Insisting that all simulations remain only hypotheticals flies in the face of the thousands of ways they are used every day, including computer games like Flight Simulator.

So, go to a military flight simulator. One used to train pilots that will fly planes in reality. Ask them, "IF I can get the simulator to safely execute a maneuver, will I be safe to execute it in real life?" Their answer will be along the lines of, "No. The simulator is valid for doing a certain number of things under certain conditions. IF you use it outside those fixed conditions, we do not guarantee its validity."

This is one of the big problems with OPFOR going off script in military wargames. They believe that the simulation represents reality, not just a specifically selected referent. So they believe anything they can cause to happen with the inputs is valid. Usually it is not because …

No simulation designer would waste their time (and someone's dollars) trying to gold plate* a simulation for functions it was not requested to do.

* – This is a term of art in the US federal government. It refers to the fact that it is illegal for a provider on a government contract to add functionality not requested to a deliverable, then charge the government for the additional effort. There are other laws that prevent them from doing that for free.

This is also why I don't want to play WH40K. They have tomes and tomes of rules that I would pay for, and then only use 5-10% of them over the course of dozens of games. Why would I pay for a bunch of rules that cover things I am not going to do? A game a little bigger than what I am going to cover is fine.

Also, as you pile rules on rules on rules, the input space grows geometrically. You can't keep up with that in your testing. That's why SJG's Murphy's Rules is so entertaining…

Mark J Wilson19 Dec 2022 3:45 a.m. PST

Re Anscombe's quartet, I'd suggest that statistics only break down if the statistician is stupid enough to insist all answers are linear and all data points must be included. Top right is clearly some sort of parabolic equation; in the bottom row, both become effective as long as the obvious outlier is discarded.

My question about using any form of statistics with any but the most modern rules to reflect weapon efficiency is: where are the data points? I'd be surprised if you can generate 4 in most cases, and I'd strongly suspect that any 4 could be challenged as cherry-picked.

Personal logo etotheipi Sponsoring Member of TMP19 Dec 2022 6:37 a.m. PST

Mark J Wilson, you are making exactly the point I have been trying to make. Reality is much broader than any data set we build/test to. You have to make assumptions about the rest to build a model.

For modern weapon system testing, we do a lot of piecewise build up. We do system level tests on the bench, integration tests, partial tests (like putting a missile emitter on a stick on the top of a cliff then flying planes at it instead of it flying at planes). And as you say, a limited number of Live Fire Test and Evaluation (LFT&E). Some things like ballistic missiles, we are lucky to get one; other things like AA missiles, we do shoot quite a number.

Regardless, because the geometries, potential targets, and environmental conditions have such scope, it is hard to get enough shots under most of the relevant conditions. We do a type of cherry picking called "design of experiments". We start with a model of what we expect to happen and then we figure out the most important shots to take to get the best data. You can spend a week at a T&E conference and only talk about different methods for DoE 12 hours a day.
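
A toy illustration of the cost problem DoE addresses (this is not any specific DoE method, and the factors and levels are invented): with three test factors you cannot afford the full set of combinations, so you pick a small plan that still exercises every level of every factor.

from itertools import product

ranges  = ["short", "medium", "long"]
targets = ["drone", "fighter", "cruise missile"]
weather = ["clear", "rain", "jamming"]

full_factorial = list(product(ranges, targets, weather))   # 27 shots: too expensive

# One cheap "cover each level at least once" plan: 3 shots instead of 27.
plan = [(ranges[i], targets[i], weather[i]) for i in range(3)]

covered = {level for shot in plan for level in shot}
assert covered == set(ranges) | set(targets) | set(weather)
print(len(full_factorial), "possible shots;", len(plan), "planned:", plan)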

Yes, when we do low level tests and we integrate the models and data it often performs differently than when we integrate and test the real thing. It's not always the integration that is the issue. Minor variances in the component models can have significant effects when brought together. But we learn and move on.

What we do is build confidence about the in-between parts. And because we have a long history of different data for similar (but not the same) systems, we can aggregate it, but always with the "null hypothesis", the assumption that we are wrong and the search for data to prove we are wrong.

When we don't find that data proving we are wrong, we increase our confidence that we are close, within the limits of what we have empirically seen.

Usually, we work to a level of confidence that makes the model useful for our purpose. But we never say we have reality in the model.

Robert Johnson19 Dec 2022 3:04 p.m. PST

etotheipi, have you worked with, or met, Jeff Baxter?

Personal logo etotheipi Sponsoring Member of TMP22 Dec 2022 6:03 a.m. PST

The guy from Steely Dan? Nope. Why?

Mark J Wilson22 Dec 2022 9:48 a.m. PST

Etotheipi

My point/question concerns rules for periods before 1900 when this sort of data analysis wasn't done, yet rules are written that pretend the information exists.

Personal logo etotheipi Sponsoring Member of TMP22 Dec 2022 12:04 p.m. PST

Mark J Wilson, if we go just prior to 1900, there is a great example. The spoiler is … it depends on what you are modeling.

On the modeling side, are we using one die roll or a couple for an attack? You might have Pk (prob kill) = Ph (prob hit) X Pd (prob of damage). So you roll for a hit then roll for damage because you want different factors affecting each roll in different ways. Or, if you want to aggregate and "average" those factors, you convolute the Ph X Pd and just do one roll against a Pk number. Of course, Ph and Pd are aggregates of many things themselves, and could be broken down further. Ph might, for instance, be separated into quality of troops (static) and confidence level (dynamic during the game). And so on.
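
A sketch of that one-roll vs. two-roll point in Python; the probabilities are invented, and collapsing to a single roll is only equivalent when no separate modifiers apply to the individual rolls:

import random

random.seed(2)
Ph, Pd = 0.4, 0.2          # hypothetical prob. of a hit, prob. a hit does damage
Pk = Ph * Pd               # aggregated single-roll number

def two_roll_kill():
    return random.random() < Ph and random.random() < Pd

def one_roll_kill():
    return random.random() < Pk

n = 100_000
print(sum(two_roll_kill() for _ in range(n)) / n,   # both converge on ~0.08
      sum(one_roll_kill() for _ in range(n)) / n)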

Now to history, we have many battles where we have reasonably detailed Pk ratios. So modeling that is reasonable. Where we forget what we learned in elementary school is when we calculate a Pk of 0.084615 and our numbers for pre and post battle are 13,000 and 11,900 troops. Our inputs were to two significant figures, but we let our output have five.
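
The significant-figures point, worked through in Python with the numbers from the paragraph above:

before, after = 13_000, 11_900
pk = (before - after) / before
print(pk)            # 0.08461538... -- more precision than the inputs support
print(f"{pk:.2g}")   # 0.085 -- two significant figures, like the inputs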

The second deception is that while we can calculate Ph and Pd components of Pk, the fact that they aggregate to 0.08 doesn't mean we have modeled "reality". There are an infinite number of ways to get to 0.08. You can get the right numbers with the wrong model, so you need some of the missing information you are talking about to model more factors.

An interesting application comes from the ACW (or WNA). There's a lot of "facts" about either the Springfield or Enfield is better with combat ratios to prove it. Of course, all their different models built on different assumptions with different numbers gave the right answer.

A couple of years ago I saw an article in Guns and Ammo where a Springfield and an Enfield were restored with period techniques and an expert marksman took them out for a spin. Turns out both rifles performed the same. Not proof, but a strong indicator that all the mythology about one rifle being superior to another was just that, mythology.

There are some things you can model and have data, but a lot of the time we are modeling a hypothetical interpretation of the data.

Mark J Wilson23 Dec 2022 4:04 a.m. PST

"There are some things you can model and have data, but a lot of the time we are modeling a hypothetical interpretation of the data".

My argument is that pre-1900 we are always modelling hypotheticals, i.e. our own personal biased view. Take Pk: do we have data for rounds fired to hits, even without range/target formation/cover input? I doubt it. My argument is then that by pretending we do, and modelling a series of steps that are pure fiction, we slow down the game, when the thing we need to model is simply the result, which we can do in one step.

Personal logo etotheipi Sponsoring Member of TMP23 Dec 2022 1:29 p.m. PST

Take Pk: do we have data for rounds fired to hits, even without range/target formation/cover input?

That assumes we are saying in our model "one roll" equals "one shot". For attrition data, we usually have a start and end point. Sometimes we have estimates (or rarely, numbers) of where we are along the way. For example, if a sub-unit of the total force gets pulled back, retreats, or routs, we can reasonably assume that group took all their casualties before that event happened.

If we only have the two points, this usually leads to the red model below, but any of the other ones fit the two points of data.

[picture: models fitted to two attrition data points]

However, most detailed attrition looks more like this:

[picture: a more detailed attrition curve]

So the question is what are we modeling? Attrition is not just one thing, but a lot of things. If we only claim to be modeling what we know, that's fine. If we describe the hypothetical referent as some assumptions about what went on "between the points", then it is OK to model that.

It's only when we say we're modeling "reality" that we have a problem.

McLaddie27 Dec 2022 3:57 p.m. PST

etotheipi:

Happy Holidays. Well, after two weeks of baking springerle, pies, roasts and candy, hedonistically consuming the same, exchanging presents and enjoying family, I finally sat down to review all that etotheipi has written. So, let me try and clarify and see if I follow your arguments, etotheipi. [Correct me if I'm wrong here]:

You can calculate a hundred different trajectories and an actual event could match none of those.

You have provided lots of graphs and links to that point. I certainly agree. You can calculate a multitude of 'trajectories' and never 'match' an actual event. Matching actual events remains the goal with many, maybe most simulations.

When you talk about statistics, you probably mean parametric statistics, which are the tools for hypotheticals. Actual empirical data should be handled with non-parametric statistics. And they are fundamentally different.

Okay, no argument there, other than why you assumed I wasn't thinking of empirical data or non-parametric statistics.

McL: "A hypothetical becomes something else when the 'hypothesis' has been proven to model past reality."

etotheipi: Not really. It's still a hypothetical. This is why we say "All models are wrong. Some models are useful." If it's useful to model a specific set of response surfaces, then that's all it is.

McL: "You mean a useful referent in regards to reality.".

etotheipi: No, it's only valid to model the referent that was used to construct it. Nothing else.

The 'referents' we are talking about are aspects of reality to be simulated. If the model can't mimic those referents, it fails; it's not valid. The notion that all models are only a hypothesis seems to rest on the fact they can't cover everything, ALL the little bits of Reality. Is that the case?

Those 'useful' models are what I am talking about AND what makes them 'useful.'

You provided links to sites about Anscombe's Quartet:

First Link quote: Statistics are great for describing general trends and aspects of data, but statistics alone can't fully depict any data set. Francis Anscombe realized this in 1973 and created several data sets, all with several identical statistical properties, to illustrate it. These data sets, collectively known as "Anscombe's Quartet," are shown below.

I would agree with this. There are limits to statistics, no argument there. The question isn't how many data sets can produce identical statistical properties, but whether one or all of them match the aspects of reality targeted. There is no reason why more than one can't work equally well within a simulation.

Second Link quote: Same Stats, Different Graphs:
"Datasets which are identical over a number of statistical properties, yet produce dissimilar graphs, are frequently used to illustrate the importance of graphical representations when exploring data."

…it's only valid to model the referent that was used to construct it. Nothing else.

Agreed. So, what if the 'referent' is some aspect of reality? What makes it 'valid'? Does that negate its ability to mimic reality simply because there is a limited set of 'referents', only part of reality?

McL: Insisting that all simulations remain only hypotheticals flies in the face of the thousands of ways they are used every day, including computer games like Flight Simulator.

etotheipi: So, go to a military flight simulator. One used to train pilots that will fly planes in reality. Ask them, "IF I can get the simulator to safely execute a maneuver, then I will be safe to execute it in real life?" Their answer will be along the lines of, "No. The simulator is valid for doing a certain number of things under certain conditions. IF you use it outside those fixed conditions, we do not guarantee its validity."

So, because there is a horizon or limit on what the flight simulator can simulate, the content inside remains hypothetical and not actually simulating aspects of reality?

Reality is much broader than any data set we build/test to. You have to make assumptions about the rest to build a model.

As the Scots say, "Mony a mickle makes a muckle."[many little things make up the large one.] That point has never been an issue. Is the argument that because any one simulation can't capture Reality entire in one data set, nothing we capture in a simulation can model reality? If that is your argument, it strikes me as an all or nothing view of modeling reality.

Now to history, we have many battles where we have reasonably detailed Pk ratios. So modeling that is reasonable. Where we forget what we learned in elementary school is when we calculate a Pk of 0.084615 and our numbers for pre and post battle are 13,000 and 11,900 troops. Our inputs were to two significant figures, but we let our output have five.

There are some things you can model and have data, but a lot of the time we are modeling a hypothetical interpretation of the data.

It would seem you have gotten into non-parametric statistics here if we are talking about distribution-free statistics.

That hypothetical interpretation of data at various points is quite true. It is true of any effort to model or theorize about the real world, whether Einstein's theories or Big Mike's calculating the collection routes for the company's trash trucks.

What pulls both from the realm of the hypothetical into reality is that each is tested against reality.
The other thing that science has in common with simulation design is that both invariably deal with just parts of reality. No one says Einstein's General Relativity isn't a valid model of reality, or that it remains hypothetical, because it only covers gravity and not ALL of reality, particularly all the little bits at the quantum level.

So the question is what are we modeling? Attrition is not just one thing, but a lot of things. If we only claim to be modeling what we know, that's fine. If we describe the hypothetical referent as some assumptions about what went on "between the points", then it is OK to model that.
It's only when we say we're modeling "reality" that we have a problem.

I would agree with that too: what are we modeling? When there is no testing of the model against reality, including all those hypothetical referents and assumptions, they do remain hypotheticals. Either they are valid hypotheticals and assumptions and work to model the reality the system was designed to model, or they fail in that comparison.

Personal logo etotheipi Sponsoring Member of TMP27 Dec 2022 4:51 p.m. PST

The question isn't how many data sets can produce identical statistical properties, but whether one or all of them match the aspects of reality targeted. There is no reason why more than one can't work equally well within a simulation.

So there are one or more realities?

This is one of the problems with saying we are modeling reality, which you have finally stopped saying. You started saying "aspects of reality."

So, because there is a horizon or limit on what the flight simulator can simulate, the content inside remains hypothetical and not actually simulating aspects of reality?

Which is what I started with, even if I called it a referent.

And it's not a limit or a horizon. A simulation is not valid outside the referent used to construct it, but it also isn't valid for the in-between spaces, either.

Is the argument that because any one simulation can't capture Reality entire in one data set, nothing we capture in a simulation can model reality? If that is your argument, it strikes me as an all or nothing view of modeling reality.

I said nothing of the sort. A simulation is entirely constructive. There is nothing in it except what is put into it. You don't get reality out of it, just the aggregation of the bits you put into it.

When there is no testing of the model against reality,

So the act of testing makes it reality? That makes no sense. Either it is reality or it is not reality. Testing just demonstrates what matches what.

A lot of models, and the simulations that use them, don't – and can't – get tested against reality. They model a reality we can't and don't experience. The military has run thousands upon thousands of simulations of events that have never happened and hopefully never will.

McLaddie01 Jan 2023 9:45 p.m. PST

So there are one or more realities?

This is one of the problems with saying we are modeling reality, which you have finally stopped saying. You started saying "aspects of reality."

etotheipi:

Well, Happy New Year! I agree, there is only one reality, and it's now 2023. Actually, I was saying 'parts of reality' and 'realism' in previous posts. 'Aspects' is just another way to say it.

I don't see it as a problem. It is just reality. wink

And it's not a limit or a horizon. A simulation is not valid outside the referent used to construct it, but it also isn't valid for the in-between spaces, either.

Yes, that 'referent' is the point of contention, me thinks.

A simulation is entirely constructive. There is nothing in it except what is put into it. You don't get reality out of it, just the aggregation of the bits you put into it.

Here I would disagree: "You don't get reality out of it, just an aggregation…"

So the act of testing makes it reality? That makes no sense. Either it is reality or it is not reality. Testing just demonstrates what matches what.

Yes, what it matches… and that 'either/or, is or isn't' is a point regarding simulations. And yeah, the term 'referent' is a point of contention, me thinks.

A lot of models and simulations that use them don't – and can't – get tested against reality. They model a reality we can't and don't experience. The military has run thousands upon thousands of simulations of events that have never happened and hopefully never will.

Yes, it is true, any number of simulations can't be tested against reality, so remain hypothetical in their processes and results. I think, when you say "They model a reality we can't and don't experience" you simply mean a referent that can't be tested, not a reality that can't be experienced…

Wargames, miniatures and board, can be like that, such as simulating World War III, though most wargames attempt to model the past, so have something to test against.

My experience in my career was designing training and knowledge-dynamic, participation simulations and games for education and business.

You have emphasized, and have given several examples of, how all models are wrong and can't model everything. No argument there. My work had to focus on models, simulations that were "useful." If they weren't, i.e. wrong, then I didn't get paid.

So, what made them 'useful?' What made them useful was that there was a 1:1 match to aspects of reality, where participants could take the knowledge and skills learned in the simulation environment, and immediately use them in their real environment--you know, back out in Reality.

I call that 1:1 match "realism" or capturing aspects of reality in the simulation--which is the point of most simulations in a variety of research and application arenas. Simulations would be "useless" if they couldn't do this.

I find that 1:1 correspondence between the 'referent' reality and the simulation sometimes magical in how well it can capture reality--that is, how well it works when done well.

A part, an aspect of reality has been captured. The argument that it isn't THE Reality, or that there is so much inside and outside Reality which isn't modeled and can't be, misses what has been and can be accomplished. Call it a glass seen as completely empty vs. one which is half-full.

It also misses an important aspect of our experience of reality. You and I never relate to and deal with more than a very, very small part of Reality. In other words, our entire experience of reality is referential in the strictest terms. So, saying a participant's experience within a wargame/simulation game can't capture something of reality is, I think, a real obstacle to seeing what simulations can do vis-à-vis capturing Reality.

It also confuses the hell out of what designers are trying to accomplish or think they are. For instance, the mechanics for a WWI Tannenberg game where the Russians' communications are crap and the Germans are listening in on their unencrypted radio messages. Is that a hypothetical 'maybe' of what it *might* have been like? Just one designer's opinion of what? Or what it was really like? [emphasis on the real]

In the end, I am interested in how many ways models, wargames and simulations can be wrong simply to ensure my designs are 'useful.' And that usefulness can be and often is in direct relation to how well the procedural system can model reality--if only in part, only some 'referents.' Yet, they are captured so well that knowledge and skills, the dynamic experiences that the participants gain are directly transferable to, and 'like' reality.

I'd suggest that whether research, science, management, military or entertainment, if simulations couldn't effect that relationship to reality, they would not be used at all, let alone be a growth industry.

We can differ on this relationship, or lack of one, between Reality and wargames/simulations. I do find that my beliefs, experiences and approach are far more 'useful' in creating simulation games.

Arjuna02 Jan 2023 4:13 a.m. PST

Thank goodness I didn't discover this thread earlier, or my AI buddy and I might have joined in …
:)

Personal logo etotheipi Sponsoring Member of TMP02 Jan 2023 10:20 a.m. PST

I agree, there is only one reality,

So if there is one reality, but an infinite number of models to represent any set of aspects, are you modeling reality or just your referent?

Here I would disagree: "You don't get reality out of it, just an aggregation…"

A simulation is purely deterministic. You can't get anything out of it that is not put in. Even when you use referee judgement, you are producing a simulation output for the referee and accepting an input from the referee.

Yes, if you give a simulation different inputs you may get different results. But if you give a simulation the exact same inputs, you will get the exact same outputs. There is nothing more there.
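
A minimal Python sketch of that "same inputs, same outputs" point; the attack rule here is invented, and the die roll becomes reproducible once the seed is treated as just another input:

import random

def resolve_attack(attacker_skill, seed):
    rng = random.Random(seed)            # the "die roll" is a function of the inputs
    roll = rng.randint(1, 6)
    return "hit" if roll + attacker_skill >= 6 else "miss"

print(resolve_attack(2, seed=42))
print(resolve_attack(2, seed=42))        # exact same inputs -> exact same output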

I think, when you say "They model a reality we can't and don't experience" you simply mean a referent that can't be tested, not a reality that can't be experienced…

Historical What-If.

simulations that were "useful." If they weren't, i.e. wrong, then I didn't get paid.

Correct vs. wrong is a different modality of measure than useful vs. not useful.

Many everyday items are designed and tested with models and simulations that are wrong. Cars, buildings, bridges, power tools, etc. The models are wrong. Demonstrably wrong. But they're still useful for certain applications, so we use them.

Yet, they are captured so well that knowledge and skills, the dynamic experiences that the participants gain are directly transferable to, and 'like' reality.

Within the training application space, the classic example is The Karate Kid. Sand the floor. Paint the fence. Wax on, wax off. These are not karate moves, but there is training transfer (in this case psycho-motor). That concept applies more broadly.

For example, in a gridiron (or any other, I believe) football game, you are never going to run through a lined-up series of tires. As opposed to the karate example, this activity is different, yet still has value for training transfer. And it doesn't model the reality of the game it is used to train for.

Likewise, the Navy's "mass conflagration" damage control training is not intended to model reality. It's intended to give you a harder problem than you will ever face in reality, and one that is not possible to face in reality (because of the nature of the cascading casualties presented). Similar to the ST:TOS Kobayashi Maru scenario, except you can win. You're not fighting an impossible-to-defeat enemy, just one that is impossibly tough. That also has training transfer and utility, but deliberately does not correspond to reality.

Occasionally, we do introduce a threat that is impossible to defeat (and we deliberately break with reality to present it) because the training objective is not to deal with the challenge but to learn to respond to the failure (ST:TOS KM). We are going to force the participants to fail, even if in reality their actions would have been successful.

Other times we deliberately break with reality in training to prevent missing a training objective. In reality that "killer shot" that stopped your force from advancing is possible. It's real. But we paid millions and millions of dollars to practice executing logistics and intelligence support to an advance, not the kinetic engagement of the advance itself.

Often times when an OPFOR publicly complains about "letting the blue force win", they actually did something outside the referent, but within the response space of the simulation. So they pulled the scenario progress away from the intended objectives. On top of that failure, just because the simulation is useful for our training objectives doesn't mean something you can do with the simulation is realistic. Or that it necessarily isn't.

Not that this doesn't cut both ways. I once was running (analytic) simulations and a trained, qualified participant responded to a stimulus by pulling himself out of the situation (and walking out of the facility). The stimulus was plausible. Even likely. Highly likely. But that participant didn't have enough information about the entire situation to see it as such. A simulation providing a valid stimulus had a negative impact on the outcome.

McLaddie09 Jan 2023 7:35 p.m. PST

So if there is one reality, but an infinite number of models to represent any set of aspects, are you modeling reality or just your referent?

etotheipi:

I think we can agree that a simulation cannot encompass all of Reality, that one Big Thing.

However, that is never how human beings experience Reality, deal with Reality, study Reality and master Reality. In every single case, the human experiences a limited number of what you call referents, nothing more.

That being the case, what humans experience of Reality can be captured in a simulation as those referents. The link between Reality and the simulation are those referents, what is called 'Realism.'

To deny that connection, insisting that there is only The Reality in total, and anything else isn't reality, simply isn't the way the world works for human beings and how they work with Reality.

A simulation is purely deterministic. You can't get anything out of it that is not put in.

Which is why it is important to get what goes in right. However, if the participants are free to make various decisions, their own determinations, then the simulation/wargame isn't purely deterministic.

I once was running (analytic) simulations and a trained, qualified participant responded to a stimulus by pulling himself out of the situation (and walking out of the facility). The stimulus was plausible. Even likely. Highly likely. But that participant didn't have enough information about the entire situation to see it as such. A simulation providing a valid stimulus had a negative impact on the outcome.

This is what happens with most hobby wargames. Not enough information and when the game doesn't 'match' their expectations/knowledge, they leave. What the participant expects and knows going in, will determine the success of the training or wargame. It is an important ingredient and of course without it, there is a negative outcome. The training fails because of a lack of referential information. Wargames fail for the same reason, only then they are called 'unrealistic' or just 'I don't like it.'

Often times when an OPFOR publicly complains about "letting the blue force win", they actually did something outside the referent, but within the response space of the simulation. So they pulled the scenario progress away from the intended objectives. On top of that failure, just because the simulation is useful for our training objectives doesn't mean something you can do with the simulation is realistic. Or that it necessarily isn't.

Yes, if you forget the simulation objectives and screw with the system, it probably will fail. As for the italicized sentence: Of course, it doesn't necessarily mean that if you can do something in the simulation, it is realistic. Duh. You have to make that action in the simulation match the actions in reality. That takes work. What is the point of the training if what you do in the simulation has no application [matching referents…] in the real world?

Likewise, the Navy's "mass conflagration" damage control training is not intended to model reality. It's intended to give you a harder problem than you will ever face in reality, and one that is not possible to face in reality (because of the nature of the cascading casualties presented).

Like the example above, you have provided a lot of design issues, not some weakness of simulations in general. The damage control training is not intended to model reality. If it was designed that way, what is the issue vis-à-vis intentionally modeling reality?

You have given examples of non-realistic modeling, messing with the simulation, the Karate Kid sanding the floor--which could be simulated with the same psychic outcome, simulating failure to train responses to failure in reality, etc. etc.

None of those examples address what simulations can do if designed and used correctly in regards to capturing reality and how people experience that reality.

Wolfhag Supporting Member of TMP10 Jan 2023 6:44 a.m. PST

I'll admit that most of this is over my head.

However, I'd like to comment on:
This is what happens with most hobby wargames. Not enough information and when the game doesn't 'match' their expectations/knowledge, they leave.

I think that identifies why people will use some rules and not others, and will modify the published rules they are using or write their own. If a particular rule does not meet the expectations of a player based on his knowledge and experience, he modifies the rule with a die roll modifier. It happens quite frequently. Just look at all of the permutations of unit activation or IGYG and turn-interruption rules, and new versions of published rules.

I spent 6 months at the Marine Officers Basic School as an "Aggressor." This is where all new Marine LTs learn to be Platoon Leaders before going to their first duty station. Since this was during the VN era most of the training and tactics were designed for jungle warfare which was patrolling, ambush/counter-ambush, and assault scenarios.

Most of the training was "staged" so our job was to be ambushed or the ambusher, defend, or assault a position. The most fun was the meeting engagements when opposing patrols ran into each other in the woods with a maximum spotting range of 25-50 yards, like in a jungle. The side with the best Situational Awareness would spot or hear the "enemy" first and set up a hasty ambush without being detected. When ambushed, the unit had to rapidly execute its immediate action drill. The "event" would be over in a few minutes and then the instructors would get everyone together to give a critique.

It wasn't about a winner or loser. It was how rapidly and how well you reacted and evaluated the initial situation and then based on that evaluation gave the order to improve the situation.

In a meeting engagement like this the point man will have only seconds to evaluate the situation and give a pre-set arm/hand signal for a Hasty Ambush, Fall Back or Hunker Down and wait. If conditions permit, the Squad Leader comes up to the point man for a Sit Rep.

We all drilled this over and over again and it was not a game with a winner or loser, people designated as KIA, points, etc. The LTs would all take turns as the Platoon Leader and Squad Leader to get experience in those roles. That way you knew exactly what each member of the platoon was expected to do.

I guess you'd look at it as a training exercise. Did it train people to react quickly in a particular way without being ordered (or activated)? Yes. Was it a simulation of reality? Maybe. Reality would have been having an LT in charge of a squad that got ambushed with 50% KIA/WIA in the first 10 seconds and having to break contact without anyone being left behind. I'm pretty sure training does include those exercises now.

I think the ideal war game for training would be to have the students go up against the instructors. The instructors don't have a set force they just generate events or problems to test the student's knowledge and response. Overloading them with "unfair" and maybe unrealistic scenarios would be a way to stress test them under less-than-ideal conditions. Reality is not fair.

Wolfhag

McLaddie10 Jan 2023 10:42 a.m. PST

I think that identifies why people will use some rules and not others, and will modify the published rules they are using or write their own. If a particular rule does not meet the expectations of a player based on his knowledge and experience, he modifies the rule with a die roll modifier. It happens quite frequently.

Wolfhag:

I agree. Modifying published rules is part of the hobby. What is a problem is that 1. too often the players are dissatisfied because they don't know what the mechanics were meant to do and *think* the rules are wrong because they don't match past experience or expectations, and 2. then gamers change the rules for the wrong reasons. I've given several examples of that dynamic.

I guess you'd look at it as a training exercise. Did it train people to react quickly in a particular way without being ordered (or activated)? Yes. Was it a simulation of reality? Maybe.

It was a simulation of quite a bit. Many simulations aren't win/lose games; there are no winners. It does surprise me how often participants will invariably make the exercise into a win/lose game to some extent. I am sure you found that with the above exercise.

Reality would have been having an LT in charge of a squad that got ambushed with 50% KIA/WIA in the first 10 seconds and having to break contact without anyone being left behind. I'm pretty sure training does include those exercises now.

So, that is 'more' reality, but not all of it. That is the power of simulations. Nobody gets KIA/WIA.

I'll admit that most of this is over my head.

I doubt it. You captured most of the issues in your last post. etotheipi hesitates to say that simulations can model reality or be realistic because Reality is one big thing impossible to capture in any artificial system or process like your exercise. Any aspects of reality modeled are called 'referents' and are not deemed 'realistic' because they are such a small part of actual reality.

I say that we experience Reality, that impossibly big thing, as mere bits and pieces in our day-to-day experience, and that is how we study reality: in bits and pieces. So a simulation, though only a small part of reality, is still reality to be experienced, and applicable to [i.e. matching] the wider world of experiences. That is why exercises like the ones you describe are 'useful,' and in limited terms realistic, even when not the whole shooting match. [Excuse the pun]

I think that the reality captured/modeled by simulations employed by scientists, educators, gamers, etc. continues to increase, because that useful relationship to reality is proven every day.
