still cis tho~

5 Posts · 8 Following · A member registered Dec 24, 2022

Recent community posts

It seems someone in the moderation chain has responded to such a post, linked below.
https://itch.freezing.top/post/14441818

(3 edits)

Might I ask where the appropriate place is to reach out regarding this problem? You say it's a well-known issue, but that doesn't necessarily mean the people who have the ability to do something about it are aware. If they are not already aware, where would be the appropriate place to report this persistent issue, so that it reaches someone who can address it?

(The below is not me directing my frustration at you, the moderator, I fully understand and respect that you are unable to directly address this problem.) 

Edit to clarify a potential misinterpretation/misunderstanding of the question: I do not believe the standard support ticketing system would be a suitable place, unless that system can escalate a ticket to a team capable of making platform-wide changes. The alternative would involve me manually assembling the web of bots into a list, each to be handled individually like any other normal "report". (I describe it as a "web" because I have noticed that, in at least one of these bots' campaigns, each bot has a copy of this ""game"" uploaded itself, but its links always point to another bot's listing of that same game, whose posts point to yet another, with the pattern repeating. This is likely done to survive account suspensions, which render all of an account's posts "useless" only partially, since you can still selectively choose to show posts from a suspended account; that workaround appears to have succeeded for them to some degree.) This issue will likely keep happening until the organizer gives up, or until something at the platform level (backend, frontend, wherever it is best suited; I don't know the design of the relevant systems) is changed to combat this behavior, or at least flag it for manual review.

Additional edit, after revisiting an old reported comment: my assumption that experiences uploaded by these bots would be automatically removed, or at least hidden from the platform, when their creator's account was suspended turns out to be false. That means that to successfully purge a comment from one of these bots, each report would need to include at least four separate components:

1: The comment itself.
2: The version of the experience uploaded to the platform by the account that posted the comment.
3: The profile of the creator of the experience linked in the comment.
4: The linked experience itself.

(For good measure, one might also tack on a link to the profile that posted the comment, in case actioning component #1 doesn't get it removed.) The odds of fully succeeding in getting all relevant parts of a single comment removed decrease with each additional component. Additionally, in my experience, multiple platforms across the internet have completely ignored things I reported en masse, whether bundled into one "report" or submitted as a series of consecutive individual reports. This has resulted in all of the reports being ignored entirely, far more often than I would like, even though the same reports were handled fine when submitted individually.

(2 edits)

Normally my expectations of such a large platform, when it comes to performing basic automoderation or flagging of pattern-matching content, would be higher. But given they're still dealing with the well-deserved blowback from their recent deindexing stunt, and the fact that these comments are doing little to zero damage to the one thing they (itch) care about, which is money, I'm not at all surprised. Multiple behavioral patterns line up from spam account to spam account: it would take minimal effort to find probably at LEAST 25 accounts created approximately 8 days ago at around the same exact time, with similarly constructed names and the same exact "game", all updated 2 days ago, with the same comment pattern. Would this be a temporary solution that the spammers would work around? Yes, but frankly that excuse is not acceptable; doing nothing about the problem, and refusing to acknowledge the problem even exists, shows that you as a company simply do not care.

Edit: After my by no means exhaustive search of recent support posts, I have yet to find a single one relating to this problem that has a response from a moderator, furthering my point about "refusing to acknowledge the problem even exists". If someone happens to know of one where a moderator (or higher) has actually replied with something, feel free to correct me and link it if possible.

Rant over, sorry; I'm just sick and tired of so many large platforms failing to perform this seemingly basic task. Google, for example, could put the """"extremely advanced AI"""" they continue to kill the planet with toward tackling this problem on YouTube, but they haven't.

This claim is false. See https://itch.freezing.top/post/11903836/edit

Well, this is very likely a case inside that .1% where it's wrong.
Hi, I'm a person on the internet who spends their free time learning, testing, and sometimes breaking computers and software. I consider myself plenty qualified to say that this is most likely (99.99999%; if I'm wrong I will quite literally eat my shoe) a false positive, for the following reasons:

1: Those detected URLs are schema resources that tell Windows what it needs to know about the structure of the application and its data.
2: Those IP addresses, while technically valid, are VERY unlikely to actually be used as addresses; they are far more likely to be version numbers or something similar. Addresses ending in .0 have a tendency to break things, and the address 1.0.0.0 belongs to Cloudflare, who, per their own data, handles roughly 16% of all internet traffic. The odds of this being a malicious host are virtually nonexistent. The other address belongs to the US Army (specifically NETCOM), and has belonged to them for 30 years. The odds of THIS being a malicious host are also virtually nonexistent.
3: That "telegram bot token" consists largely of the same digit repeated. If someone can generate a valid bot token with the same digit repeated that many times, and prove to me that it is valid, I will eat my OTHER shoe.
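To illustrate point 2: the reason scanners "find" IP addresses in version strings is that a dotted version number like 1.0.0.0 is textually indistinguishable from an IPv4 address to a naive pattern matcher. A minimal sketch (the regex and sample strings are my own illustrations, not the scanner's actual implementation):

```python
import re

# A naive IPv4 extraction pattern of the kind many scanners use
# to pull "indicators" out of binaries and manifests.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

# Strings that commonly appear inside application files (illustrative).
samples = [
    "ProductVersion=1.0.0.0",      # an assembly/file version number
    "app build 2.1.0.154",         # another version string
    "connect to 203.0.113.7:443",  # an actual (documentation-range) address
]

for s in samples:
    # Every sample "matches" as an IPv4 address, version numbers included.
    print(s, "->", IPV4.findall(s))
```

The pattern cannot tell a version field from a network indicator; that distinction requires context the scanner evidently isn't using.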

It is for these reasons that I call your claim complete and utter BS. Please seek input from others before accusing a developer of uploading malware.
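On point 3 above: a string of one repeating digit is a poor candidate for a real secret, and a simple entropy check makes that concrete. The token shape regex below reflects the commonly documented Telegram format ("numeric id, colon, ~35-character secret"), and both example "tokens" are made up for illustration:

```python
import math
import re
from collections import Counter

# Commonly documented Telegram bot-token shape: "<numeric id>:<35-char secret>".
TOKEN_SHAPE = re.compile(r"^\d{8,10}:[A-Za-z0-9_-]{35}$")

def shannon_entropy(s: str) -> float:
    """Shannon entropy of s, in bits per character."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

# A hypothetical "token" made of one repeating digit, like the one flagged.
repeated = "7777777777:" + "7" * 35
# A hypothetical token with a random-looking secret (invented for illustration).
plausible = "1234567890:" + "AAF5kQw9_x2LmN0pQrStUvWxYz-bCdEfGhI"

for t in (repeated, plausible):
    secret = t.split(":", 1)[1]
    # Both pass the shape check, but the repeated-digit secret has zero entropy.
    print(TOKEN_SHAPE.match(t) is not None, round(shannon_entropy(secret), 2))
```

A shape-only check flags both strings equally; an entropy threshold would immediately separate a generated secret from a run of repeated digits, which is exactly the kind of cheap sanity check the scanner apparently skipped.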