
"Fun with Grok 3 as a play testing partner & rules generator" Topic
6 Posts
| shawnzeppi2 | 21 Mar 2025 10:27 a.m. PST |
Anybody else try messing with AIs to design and/or play a miniatures-type game recently? I found the state of the art (Grok 3, free version) pretty amazing. What impressed me most was how intuitively it understood a set of rules and played it with a limited number of errors. link |
etotheipi  | 21 Mar 2025 2:23 p.m. PST |
I have been running AI playtests on rules and scenarios for a while before giving the games to human players. When you write the AI yourself, it is easier to tune it and to avoid some of the errors you are seeing.

If you can scaffold your game, it might work better to start with a subset of the rules and scenario content, then build up in several steps. F'r'ex, I always start with basic movement by itself and run a couple thousand trials to see how it works on its own (see the sketch at the end of this post). Then maybe a small sandbox with a few units just beating the crap out of each other (no significant tactical or strategic maneuver). And then build from there.

Working out the kinks with fewer things going on makes it easier to diagnose problems. It also makes it easier to suss out emergent problems (issues that only appear when multiple interactions influence each other).
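Here is roughly what that first movement-only stage looks like as a harness. This is a minimal Python sketch, not my actual code; the move rule (roll a d6 each turn and advance that many squares) and the 24-square board are made up for the example:

import random
from statistics import mean

BOARD_LENGTH = 24  # squares from deployment edge to objective (assumed)

def turns_to_cross(trials=2000):
    # Movement-only trial: each turn the unit rolls a d6 and
    # advances that many squares (hypothetical move rule).
    results = []
    for _ in range(trials):
        position, turns = 0, 0
        while position < BOARD_LENGTH:
            position += random.randint(1, 6)
            turns += 1
        results.append(turns)
    return results

data = turns_to_cross()
print("average turns to cross:", round(mean(data), 2))
print("slowest trial:", max(data))

If the average or the spread is not what the scenario needs, you fix movement before layering combat on top. |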
| shawnzeppi2 | 22 Mar 2025 9:31 a.m. PST |
Thanks for the response, etotheipi. Your training regimen makes a lot of sense, even to a non-AI/non-programmer type, and the training process for game play sounds interesting. But I am coming at this from a user's perspective, not an AI trainer's. I was simply testing the state-of-the-art free AI tool's ability, from a reasoning standpoint, to design and play a typical miniatures game with at least semi-interesting rules. And since I don't have my own AI running on my own servers, I'm using what's on the Internet as a service, like most people playing around with it. From that perspective, I can't dumb the game down any more, or it would not be fun to play. But as I note in the blog, these tools are improving at a high clip (based on my own experience and standard industry test metrics), so in the near future I expect to have a smart non-human opponent available for any simple minis system I upload… Maybe not just yet? |
etotheipi  | 22 Mar 2025 5:36 p.m. PST |
If you're going to use AI tools, you really need to learn about them and how to use them. I mean, you can just go buy an industrial chainsaw and use it with no training or investigation … just light it off and go.

Yesterday, I was discussing the use of AI to write English papers in high school with an educator. Here's what ChatGPT had to say about the topic:

Encouraging students to use AI without teaching them how it works, its limitations, and how to critically evaluate its outputs is essentially training them to outsource thinking rather than develop it.

Key Issues with This Approach:

1. Lack of Meta-Understanding. If students don't engage with the process of research, writing, or problem-solving themselves, they won't develop the deeper understanding needed to assess AI-generated content. Example: a student who never learns how to structure an argument may not recognize when AI-generated text lacks logical coherence.

2. Inability to Spot Errors or Biases. AI tools often generate plausible-sounding but incorrect information. Without a foundation in the subject, students won't know when something is wrong. Example: AI might generate a perfectly structured historical essay but misattribute key events. If students never learned fact-checking, they'll accept errors as truth.

3. Erosion of Critical Thinking. If students rely on AI to generate answers instead of grappling with the material, they won't develop the ability to evaluate information critically. This is particularly dangerous in subjects where nuance and interpretation matter (e.g., history, literature, ethics).

4. Misuse and Over-Reliance. Without guidance, students may use AI to replace thinking rather than enhance it. Example: instead of using AI to refine an argument, they might just copy whatever it generates without understanding it.

A Better Approach: AI as a Teaching Aid, Not a Shortcut. Instead of just throwing AI at students, schools should:
- Teach how AI generates responses (pattern recognition, probability-based text generation, limitations).
- Emphasize verification skills (cross-checking sources, recognizing hallucinations).
- Assign tasks that require AI use but also require students to critique or refine what AI provides.
- Encourage AI as a tool for augmentation, not replacement (e.g., brainstorming ideas, refining structure, or improving clarity).

Bottom Line: Using AI without proper education on its use degrades learning rather than enhancing it. If students don't build their own analytical muscles, they won't recognize when AI is leading them astray. [Redacted] is right to push back.
|
| shawnzeppi2 | 22 Mar 2025 7:52 p.m. PST |
Thanks. I was investigating whether state-of-the-art free AI could generate and play a simple game with random elements. Either it can 1) do that perfectly, 2) not do that at all, or 3) do that with some number of errors. Turns out the third is where we are at. A user cannot train it (and of course that was never my goal). A user CAN look at its moves and actions, validate that they are IAW (in accordance with) the rules, and see if they are clever. And if they are not clever, or not IAW the rules, make suggestions to the AI to fix or improve them. Maybe not when playing a game as complicated as Go, but certainly when you are trying to play a simple minis-like game with simulated dice rolling (see the sketch below). I will try the exact same test using the same inputs in a few months and see if the errors and strategies improve. That's it.
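To make that concrete, here is a minimal sketch of the kind of referee check a user can run on each reported AI action. The rules here (6-inch move allowance, hit on a d6 roll of 4+) are invented for the example, not taken from any published set:

MOVE_ALLOWANCE = 6  # inches per activation (hypothetical rule)
HIT_ON = 4          # d6 result needed to hit (hypothetical rule)

def legal_move(claimed_inches):
    # An AI move is legal if it stays within the allowance.
    return claimed_inches <= MOVE_ALLOWANCE

def consistent_shot(claimed_roll, claimed_hit):
    # The claimed hit or miss must match what the claimed roll allows.
    return claimed_hit == (claimed_roll >= HIT_ON)

# Audit a turn the AI reports as "moved 7 inches, rolled a 3, scored a hit":
print(legal_move(7))             # False: movement exceeds 6 inches
print(consistent_shot(3, True))  # False: a 3 cannot hit on 4+

When a check fails, you point the discrepancy out to the AI and see whether it corrects itself. |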
etotheipi  | 23 Mar 2025 5:24 a.m. PST |
That's cool, that's where you are looking. One of the issues is that asking "Can an AI do a task?" is like coming here and asking "I have a wargame, do you want to play?" There are probably relatively few people here who would say yes to playing a "wargame" without further specificity, and even fewer who would actually mean it.

Like wargames, there are many categories of AI and many ways to categorize AI. Then, just as there are specific sets of rules, there are specific AIs. For a wargame, the experience is a collaboration between the writers of the rules and the people running the event. For an AI, it is a collaboration between the developers and the user. The more tools (like prompt engineering or probability analysis; see the sketch below for the latter) you have to execute the application (wargame or AI), the more different the outcomes will be. Hopefully, better.
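As an example of cheap probability analysis, here is a minimal Python sketch for a made-up mechanic (succeed on 8+ with 2d6). The mechanic is purely an assumption for illustration; the point is that exact odds are easy to enumerate before you ever run a playtest:

from itertools import product
from fractions import Fraction

TARGET = 8  # succeed on 8+ with 2d6 (made-up mechanic)

rolls = list(product(range(1, 7), repeat=2))   # all 36 outcomes
wins = sum(1 for a, b in rolls if a + b >= TARGET)
p = Fraction(wins, len(rolls))
print("P(2d6 >=", TARGET, ") =", p, "=", round(float(p), 3))

That works out to 5/12, about 42%, which you can then compare against what the AI (or the dice) actually produce. |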