• 1 Post
  • 86 Comments
Joined 1 year ago
Cake day: June 10th, 2023




  • While the result from generating an image through AI is not meant to be “factually” accurate, it’s seeking to be as accurate as possible when it comes to matching the prompt that is provided. And a prompt like “1943 German Soldier”, “US Senator from the 1800s” or “Emperor of China” carries some implications about what kinds of images would be expected and which kinds wouldn’t. Just like how you wouldn’t expect a lightsaber when asking for “medieval swords”.

    I’m not convinced that attempting to “balance a biased training dataset” in the way that this is apparently being done is really attainable or worthwhile.

    An AI can only work based on biases, and it’s impossible to correct/balance the dataset without just introducing a different bias, because the model is just a collection of biases that discriminate between how different descriptions relate to pictures. If there were no bias for the AI to rely on, it would not be able to pick anything to show.

    For example, the AI does not know whether the word “Soldier” really corresponds to someone dressed like the person in the picture, it’s just biased to expect that. It can’t tell whether an actual soldier might just be wearing pajamas, or whether someone dressed in one of those uniforms might not be an actual soldier.

    Describing a picture is, in itself, an exercise in assumptions, biases and appearances that are based on preconceived notions of what our expectations are when comparing the picture to our own reality. So the AI needs to show whatever corresponds to those biases in order to match, as accurately as possible, our biased expectations of what those descriptions mean.

    If the dataset is complete enough, and yet it’s biased to show predominantly a particular gender or ethnicity when asking for “1943 German Soldier” because that happens to be the most common image of what a “1943 German Soldier” is, but you want a different ethnicity or gender, then add that ethnicity/gender to the prompt (like you said in the first point), instead of supporting the idea of having the developers force diversity into the results in a direction that contradicts the dataset just because the results aren’t politically correct. It would be more honest to add a disclaimer and still show the result as it is, instead of manipulating it in a direction that actively pushes the AI to hallucinate.

    Alternatively: expand your dataset with more valuable data in a direction that does not contradict reality (e.g. introduce more pictures of soldiers of different ethnicities from situations that actually are found in our reality). You’d still be altering the data, but you’d be doing it without distorting the bias unrealistically, since the new examples would be grounded in reality.


  • The packager should always “explicitly require” the dependencies of a Nix package… it’s not like it’s a choice: if there are missing dependencies, then that’d be a bug.

    If the package does not declare its dependencies properly, then it might not run properly in NixOS, since there are no “system libraries” in that OS other than the ones that were installed from Nix packages.

    And one of its advantages over AppImages is that instead of bundling everything together, causing redundancies and inefficient use of resources, you actually get shared libraries with Nix (not the system ones, but Nix dependencies). If you have multiple AppImages that bundle the same libraries, you can end up having the exact same version of a library installed multiple times (or loaded in memory, when running). AppImages do not scale: you would be wasting a lot of resources if you were to make heavy use of them, whereas with Nix you can run an entire OS built from Nix packages.
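
    As a rough sketch of what “explicitly requiring” dependencies looks like, here is a hypothetical minimal derivation (the package name, URL, hash and dependency list are made up for illustration):

        # Hypothetical minimal derivation: every library the program needs has to be
        # declared here, nothing is picked up implicitly from the host system.
        { stdenv, fetchurl, openssl, zlib }:

        stdenv.mkDerivation {
          pname = "example-tool";            # made-up package name
          version = "1.0";
          src = fetchurl {
            url = "https://example.org/example-tool-1.0.tar.gz";
            sha256 = "";                     # placeholder, Nix reports the real hash on build
          };
          buildInputs = [ openssl zlib ];    # explicit dependencies, shared with any other
                                             # package that resolves to the same store paths
        }

    Two packages that resolve to the same openssl end up pointing at the same store path, which is where the sharing (and the lack of duplication) comes from.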




  • Flatpak still depends on runtimes, though. I have a few different runtimes I had to install just because of one or two flatpaks that required them (for example, I have both the GNOME and KDE flatpak runtimes, despite not running either of those desktop environments)… and flatpaks can depend on specific versions of runtimes too! I remember one time flatpak recommended that I uninstall a flatpak program I had because it depended on a deprecated runtime that was no longer supported.

    Also, some flatpaks can depend on another flatpak, like how for Godot they are preparing a “parent” flatpak (I don’t remember the exact terminology) that Godot games can depend on, in order to reduce redundancies when having multiple Godot games installed.

    Because of those things, you are still likely to need a flatpak remote configured and an internet connection when you install a flatpak. It’s not really a fully self-contained thing.

    AppImages are more self-contained… but even those might make assumptions about what libraries the system has, which makes them not as universal as they might seem. That, or the file needs to be really big, unnecessarily so. Usually it’s a combination or compromise between both problems, at the discretion of the dev doing the packaging.

    The advantage with Nix is that it’s more efficient with the user’s disk space (because it makes sure you don’t get the exact same version of a library installed twice), while making it impossible to have a dependency conflict regardless of how old or new the thing you want to install is (which is something the package manager from your typical distro can’t do).
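
    To make the runtime point above concrete, this is roughly how it looks from the command line (the runtime name is real, but the version is just an example):

        # installing an app can pull in a whole runtime as a dependency,
        # and the runtime itself is fetched from the configured remote
        flatpak install flathub org.gnome.Platform//45
        # list which runtimes ended up installed alongside your apps
        flatpak list --runtime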


  • Were the earlier series not focused on shared values to a more or less similar extent too?
    Kirk has usually been given the reputation of being a rule-breaker, often ignoring Starfleet rules when they are in conflict with his values. Even off-camera (in DS9, I think) they attribute 17 temporal violations to him, and I think he has been accused of violating the Prime Directive multiple times.


  • Ferk@kbin.social to Programmer Humor@programming.dev · Whitespace

    But C syntax clearly hints at int *p being the expected format.

    Otherwise you would only need to type int* p, q to declare two pointers… however, doing that only declares p as a pointer. You are actually required to type * in front of each variable name intended to hold a pointer in the declaration: int *p, *q;
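
    A small example to illustrate it (the variable names are made up, it just shows which declarations end up being pointers):

        #include <stdio.h>

        int main(void) {
            int x = 42;

            int* p, q;      /* despite the spacing, only p is a pointer; q is a plain int */
            int *r, *s;     /* the * has to be repeated for every pointer declared */

            p = &x;
            q = x;          /* q can only hold an int value, not an address */
            r = &x;
            s = &x;

            printf("%d %d %d %d\n", *p, q, *r, *s);
            return 0;
        }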


  • I feel it’s a balance. Each operation has a purpose.

    Rebasing makes sense when you are working on a feature branch together with other people, so you rebase your own commits to keep the feature branch lean before you finally merge it into the main branch, instead of polluting the history with a hard-to-follow mess of sub-branches for each person. Or when you yourself ended up needing to rewrite (or squash) some commits to clean up / reorganize related changes for the same feature. Or when you already committed something locally without realizing you were not in sync with the latest version of a remote branch you are working on, and you don’t want that one local commit to turn into a separate branch that has to be merged.

    Squashing with git merge --squash is also very situational… ideally you wouldn’t need it, unless your commits are messy/tiny/redundant enough that combining them together makes things better.
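
    For what it’s worth, a sketch of the kind of workflow I mean (the branch names are made up):

        # bring a local feature branch up to date with main,
        # replaying my own commits on top of the latest version
        git fetch origin
        git switch my-feature
        git rebase origin/main

        # optionally tidy up / squash the branch's own commits before merging
        git rebase -i origin/main

        # and only squash on merge when the individual commits aren't worth keeping
        git switch main
        git merge --squash my-feature
        git commit -m "Add my feature"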




  • Ferk@kbin.social to Programmer Humor@programming.dev · ifn't

    Yes… how is “reducing exclamation marks” a good thing when you do it by adding a ' (not to be confused with ‘, ’, ´ or ` …which are all different characters)?

    Does this rely on the assumption that everyone uses a US QWERTY keyboard, where ! happens to be slightly more inconvenient to type than '?


  • I’m not convinced that the gacha model works for every demographic. And even if it did, I’m sure it’s much harder to be successful selling that kind of crap as an independent studio with no prior experience doing it. Maybe exploiting the D&D / Forgotten Realms franchise would have helped… but after the OGL fiasco (which is a good example of how profit was affected negatively when D&D fans cancelled their D&D Beyond subscriptions in the wake of WOTC’s new monetization plans), I’m not really convinced the game would have made as much money as they can with this different focus.

    Reputation also affects profits. And long term, I’m convinced Larian’s approach will prove to be more profitable than it would have been had they chosen to enter the wide and unforgiving world of competing RPG gacha games by introducing “yet another one” into a market that is increasingly tight, with a public that is getting more and more tired of it.

    Yeah, Diablo Immortal / Diablo 4, or probably even Fallout 76, made money with those tactics… but I don’t believe those profits are gonna last that long, or reach an overall total as high as they could have if you think long term. They have managed to get a lot of people to stop caring about those franchises, so I’d argue they are actually burning down their golden goose just for a short big burst of cash, instead of actually maximizing the profit they could have made from the goose had they taken care of it while it steadily produced golden eggs people actually wanna buy…


  • Even when you care about a product, at the end of the day you still have to put a price tag on it, and you’ll still have to give fair shares to all the people who worked on it, while saving up as much as you can to invest in more well-cared-for products… without making it so expensive that not enough customers will buy it.

    Caring about the product, investing in it and producing something that is actually good and that people value highly (so they are willing to pay more for it) is not incompatible with maximizing profit. In fact, I wouldn’t be surprised if Larian is profiting quite a bit from all the good publicity (imho, well deserved) they are getting for not having gone down the road of predatory monetization tactics.
    They probably would not have been as successful if they had. So I’d argue they are maximizing profits in the best way an independent game studio can.
    Choosing not to participate in subscription services at the moment is likely also in their best interest, profit-wise. Particularly at this point, with the momentum they have.



  • Apparently, this article is talking about the “Legacy CS:GO Version” that was available (even after the CS2 launch) for devices that were unable to run CS2. It seems that was less than 1% of CS:GO players, so they are ending support for it, even though they claim it should still be available with reduced compatibility.

    I think anyone can switch to this version in the “Beta” tab of the properties window for CS2 by selecting “csgo_legacy”.

    What is the legacy version of CS:GO?

    The legacy version of CS:GO is a frozen build of CS:GO. It has all of the features of CS:GO except for official matchmaking.

    What will happen after the end of support for the legacy version of CS:GO?

    After January 1, 2024 the game will still be available, but certain functionality that relies on compatibility with the Game Coordinator (e.g., access to inventory) may degrade and/or fail.



  • Many hosts allow you to set rules to protect branches from getting their commits removed on the remote (in fact, I think that’s the default for GitLab main branches), or to prevent people from pushing their commits to them directly.
    I expect even “the main branch has to stay more or less in sync with origin/main” can be automated… though it might not be what you always want, depending on how you work.
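
    As an illustration (just a sketch, most people would rely on the host’s own “protected branches” settings instead of writing this themselves), a server-side pre-receive hook can reject pushes that would remove commits from main:

        #!/bin/sh
        # Reject non-fast-forward pushes to main, i.e. pushes that would rewrite history.
        zero=0000000000000000000000000000000000000000
        while read oldrev newrev refname; do
            [ "$refname" = "refs/heads/main" ] || continue
            [ "$oldrev" = "$zero" ] && continue   # branch is being created, nothing to check
            if ! git merge-base --is-ancestor "$oldrev" "$newrev"; then
                echo "Rejected: push to main would remove existing commits." >&2
                exit 1
            fi
        done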