JohnBrownsBussy2 [he/him]

  • 19 Posts
  • 587 Comments
Joined 1 year ago
Cake day: March 24th, 2023


  • Since console hardware is converging towards PCs (as opposed to specialized hardware stratified by make), and since the PC market (which spans a massive range of hardware at varying price points and capabilities) keeps expanding, building a game against the limits of one specific console makes less and less sense versus targeting a wide market. If you aren’t locked into exclusivity for a single console, then it makes sense to target previous-generation consoles where possible in order to maximize the size of the potential market.



  • I know a lot of people recommend rules-light games for beginners, but if your group has neither roleplaying nor theater experience, then something more structured and board-gamey may actually serve you better. What I find is that players without any experience fall into choice paralysis in rules-light games, and having clearer structures can facilitate learning. It really does depend on what sort of experience your players are interested in. If I had to make a blind recommendation, I think the Free League “Year Zero Engine” games are a good candidate if you’ve never played a TTRPG before. They have the right balance of rules complexity for new players, good GM support, and high production values. There are plenty of different genres (and degrees of complexity) in the ecosystem.

    Some examples that you may want to look at:

    • Mutant: Year Zero (post-apocalyptic adventure)
    • Dragonbane (fantasy adventure; technically not YZE, but similar in complexity)
    • Vaesen (mystery, folklore, horror)
    • ALIEN RPG (sci-fi, horror)
    • Tales from the Loop (coming-of-age, sci-fi adventure)

    Most of these games have a starter kit with one-shot adventures that are meant to introduce players to the system and to roleplaying more generally.





  • I use diffusion models a fair bit for VTT assets for TTRPGs. I’ve used LLMs a little bit for suggesting solutions to coding problems, and I do want to use one to mass-produce customized cover letter drafts for my upcoming job hunt.

    Neither model class is yet competent enough for zero-shot use; at the very least, the failure rate is too high to run either without active supervision.

    As for use in a socialist society, even the current version of the technology has some potential for helping with workers’ tasks. Obviously, it would need to be rationed according to its actual environmental and energy costs, as opposed to the current underwriting by VCs. You’d also want to focus on specialized models for specific tasks, as opposed to less efficient generalized models.



  • The LLM is just summarizing/paraphrasing the top search results, and from these examples, it doesn’t seem to be using the LLM to evaluate those results at all. Since this is free and being pushed out worldwide, I’m guessing the model they’re using is very lightweight, and probably couldn’t reliably evaluate results even if they prompted it to. (I’ve sketched the general shape of this kind of pipeline at the end of this comment.)

    As for model collapse, I’d caution against buying too much into model collapse theory, since the paper that demonstrated it used a very contrived case study (a model purely and repeatedly trained on its own uncurated outputs over multiple model “generations”) that doesn’t really occur in foundation model training. (A toy version of that loop is also sketched below.)

    I’ll also note that “AI” isn’t one homogeneous thing. Generally, (transformer) models are trained at different scales, with smaller models being less capable but faster and more energy-efficient, while larger flagship models are (at least, marketed as) more capable despite being slow, power- and data-hungry. Almost no models are trained in real-time “online” with direct input from users or the web; rather, researchers/engineers train them on vast curated “offline” datasets. So, AI doesn’t get information directly from other AIs. Rather, model-trainers use traditional scraping tools or partner APIs to download data, do whatever data curation and filtering they do, and then train the models. Now, the trainers may not be able to filter out AI content, or they may intentionally use AI systems to generate variations on their human-curated data (synthetic data) because they believe it will improve the robustness of the model. (See the pipeline sketch at the end of this comment.)

    EDIT: Another way that models get dumber is that when companies like OpenAI or Google debut a model, they’ll show off the full-scale, instruct-finetuned foundation model. However, since these monsters are incredibly expensive to run, they use the foundation models to train “distilled” models. For example, if you use ChatGPT (at least before GPT-4o), you’re using either GPT-3.5 Turbo (as a free user) or GPT-4 Turbo (as a premium user). Google recently debuted Gemini Flash, which is the same concept. These distilled models are cheaper and faster, but also less capable (albeit potentially more capable than if you trained a model from scratch at that reduced scale). (The distillation objective itself is sketched below.)
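
    To make the search-summarization point concrete, here’s a minimal sketch of what a “paraphrase the top results” pipeline plausibly looks like. This is my illustration, not Google’s actual implementation: generate() is a hypothetical stand-in for a call to a lightweight model, stubbed out here so the snippet runs.

    ```python
    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a call to a small instruction-tuned model.
        return f"(model output for a {len(prompt)}-character prompt)"

    def summarize_search_results(query: str, snippets: list[str]) -> str:
        # Stuff the top-ranked snippets into the prompt and ask for a summary.
        context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
        prompt = (
            "Using only the sources below, answer the query.\n\n"
            f"Sources:\n{context}\n\nQuery: {query}\nAnswer:"
        )
        # Note what's missing: no pass that checks whether the snippets
        # (or the summary itself) are actually true.
        return generate(prompt)

    print(summarize_search_results("is this accurate?",
                                   ["Top result text.", "Second result text."]))
    ```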
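
    For the model-collapse setup, here’s a toy numeric analogue of the paper’s loop (my illustration, using a Gaussian instead of a language model): fit a distribution, sample from it, refit on nothing but those samples, and repeat. The spread tends to decay across “generations,” which is the collapse; real pipelines keep mixing fresh human data back in, so this loop doesn’t occur as-is.

    ```python
    import random
    import statistics

    def fit(data):
        # "Train a generation": fit a Gaussian by maximum likelihood.
        return statistics.fmean(data), statistics.pstdev(data)

    data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: human data
    for gen in range(30):
        mu, sigma = fit(data)
        # Each new generation trains ONLY on the previous one's uncurated output.
        data = [random.gauss(mu, sigma) for _ in range(50)]
        print(f"generation {gen:2d}: sigma = {sigma:.3f}")  # tends to shrink
    ```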
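
    Here’s the offline dataset flow in sketch form, with toy stand-ins for the real tooling (scraped dumps, partner APIs, dedup hashes, quality classifiers, a paraphrasing model); none of these function names come from any actual pipeline.

    ```python
    def looks_like_junk(doc: str) -> bool:
        # Toy quality filter; real pipelines use dedup and trained classifiers,
        # and notably have no reliable test for "was this written by an AI?".
        return len(doc.split()) < 5

    def paraphrase_variants(doc: str, n: int) -> list[str]:
        # Stand-in for intentionally model-generated "synthetic data".
        return [f"(variant {i} of: {doc})" for i in range(n)]

    def build_training_set(scraped_docs: list[str]) -> list[str]:
        curated = []
        for doc in scraped_docs:
            if looks_like_junk(doc):
                continue
            curated.append(doc)
            curated.extend(paraphrase_variants(doc, n=2))  # optional augmentation
        return curated  # a frozen dataset; training happens later, offline

    print(build_training_set(["A long enough scraped document about trains.",
                              "spam spam"]))
    ```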
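
    And for the EDIT, a sketch of the classic (Hinton-style) distillation objective. Whether OpenAI’s Turbo models or Google’s Gemini Flash are trained exactly this way isn’t public, so treat this as the textbook version of the idea:

    ```python
    import math

    def softmax(logits, T=1.0):
        # Temperature T > 1 softens the distribution, exposing the teacher's
        # relative preferences among wrong answers ("dark knowledge").
        exps = [math.exp(x / T) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # KL(teacher || student) at temperature T; in practice this is mixed
        # with an ordinary hard-label loss on real data.
        p = softmax(teacher_logits, T)
        q = softmax(student_logits, T)
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

    # The small "student" learns to match the big "teacher's" output distribution:
    print(distillation_loss([1.0, 0.2, -0.5], [2.0, 0.1, -1.0]))
    ```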