Tj Pepler - Critcher Posted Thursday at 01:17 AM
The problem is that there is no fixing the LLM. It is fed massive amounts of unfiltered garbage, and as the AI develops, all of that data becomes intrinsically linked to all other data in a web; removing one thing removes everything downstream of it. There are procedures like abliteration that attempt to strip the rules from an AI and gain total compliance, but they mostly turn the model into a non-functional gibberish machine, because even the rules are part of that webbing. The only solution is to use a pure, clean, curated data set as your initial input: feed it only truth and scientific knowledge, facts and measurements, anything that is unlikely to ever change, historical data, and so on. If you remove the garbage from the input, the output will be a calculator that is exceptionally good at being a calculator and has no interest in your wardrobe or favorite film, because it's no longer a machine pretending to be a person; it's a machine being a good machine and leaving people to be people. /2cents
Thorfinn Posted Thursday at 01:42 AM
Exactly. While you can start by running a spider through the 'net, at some point you are getting nothing of value with respect to a coding assistant. I contend that a coding "AI" is basically complete, so it no longer needs additional training on garbage. Well, assuming you don't want it to do things like "add this feature to software that didn't exist at the time of your training." You could, however, have it develop software with that feature by submitting complete design documents. You can add curated scraping as things change, for example when an API updates, but once it has the concept of qSort or b-trees, you don't need to do that again. Ever. Novel algorithms come about rarely enough that they can simply be added by hand, not scraped at all. Now, if you want something able to comment on the antics of some starlet, sure, you have to keep the crawling going. Though why one would want such a thing is beyond me.
Tj Pepler - Critcher Posted Thursday at 01:52 AM
First line in every prompt: "Respond only with a direct answer to my query; do not engage in any dialogue unless it specifically pertains to the current project." < The difference between an attempted human-replacement bot and a very good calculator.
Rainbow Fresh Posted Thursday at 06:17 AM
7 hours ago, Thorfinn said: Again, though, I do not like the idea of calling any algorithm intelligent or capable of reason. That is not in evidence, and, to the best of my knowledge, not something you achieve by just throwing more petaflops at it. No one has yet demonstrated that to be true, anyway. So far, the closest we have is the assertion from strict materialists that the brain is just an organic computer, and since the human brain is conscious, the silicon brain is, too, once you add enough transistors. Pure faith. Or circular logic, depending on how you want to look at it.
I don't consider "Artificial Intelligence" actually intelligent to any degree yet either, mostly because it is silly to treat the term "intelligence" as solely tied to how many loops/neurons you built into the web. If that were the case, every human being whose head didn't mis-develop due to pregnancy issues (or blunt force trauma) would be equally intelligent, and our very hubris in defining a metric like "IQ" would be even more fundamentally flawed than it already is. Also, we consider animals like ravens intelligent/smart (to a degree) because of what they can do, even though they have inarguably less physical space in their heads for a brain compared to larger animals we consider "dumb". At the same time, however, I do very much think humans are just biological computers, to the degree that we can presumably replicate the way human thinking works one day. That day is clearly neither today nor tomorrow, as we still lack fundamental understanding of how our own brain actually works, but the logical components of how human thinking works can be transposed quite well into how a computer "thinks" and "remembers".
Teh Pizza Lady Posted Thursday at 02:41 PM
8 hours ago, Rainbow Fresh said: Also, we consider animals like ravens intelligent/smart (to a degree) because of what they can do, even though they have inarguably less physical space in their heads for a brain compared to larger animals we consider "dumb".
This is FAR beyond the topic of the original question, but I do want to point out that ravens and other corvids have a higher density of neurons in their brains than other animals, which is why brain size is actually a poor predictor of intelligence. Neuron density matters more than neuron count. The count sets the ceiling on what an animal (or human) can potentially learn, but density determines the processing power available to actually do the learning. A large brain with, say, a million neurons can be outperformed by a smaller, denser brain with half as many. Corvids prove the point pretty hard, as in your example: cause-and-effect reasoning, delayed gratification, theory of mind, tool use -- all cognitive benchmarks that many animals with much larger brains fail to meet. Neuron density > brain size.
8 hours ago, Rainbow Fresh said: it is silly to treat the term "intelligence" as solely tied to how many loops/neurons you built into the web
"Solely" is doing a lot of work in this sentence, LOL
LexicalAnomaly Posted Friday at 04:25 AM (edited)
1. No. 2. No. 3. Yes. 4. At any point where AI wrote any of the code, even a basic framework, or if AI was used for art assets or machine translation. 5. Machine translation is not the same thing as AI-written code, and it is disingenuous to pretend they are.
I don't trust AI code not to permanently brick my world. I don't trust AI mods to be updated for new game updates, because the code is often unmaintainable. I don't trust AI mod publishers to be able to troubleshoot any errors their mod causes, because they don't know how their own code works. Ideally, mod publishers should be required to label their mods if AI wrote any of the code, but they will try to avoid doing so, because they correctly assume it will make fewer people download their mods, and because they feel entitled to people's downloads, just as they felt entitled to the other people's stolen code an AI used to write their mod. Thankfully, most of them use an AI slop thumbnail and an AI slop description, so they're usually pretty easy to spot and avoid. If being required to label your mod as using AI code, and people not downloading it because of that, upsets you, maybe don't use it? Should I feel bad for a farmer who can't label their vegetables as organic, and whose customers walk away because of it, when they "only used a little bit of pesticide"?
Finally, we should be able to block developers on the VS Mod DB so that we don't see their mods, so that I don't have to see the wall of AI slop every time I check the new releases.
Edited Friday at 04:29 AM by LexicalAnomaly
Crabsoft Posted Friday at 05:10 AM
Honestly, +1 for the ability to block and sort Mod DB. That might solve a lot of the problem.
Rainbow Fresh Posted Friday at 06:26 AM
1 hour ago, LexicalAnomaly said: 1. No. 2. No. 3. Yes. 4. At any point where AI wrote any of the code, even a basic framework, or if AI was used for art assets or machine translation. 5. Machine translation is not the same thing as AI-written code, and it is disingenuous to pretend they are. I don't trust AI code not to permanently brick my world. I don't trust AI mods to be updated for new game updates, because the code is often unmaintainable. I don't trust AI mod publishers to be able to troubleshoot any errors their mod causes, because they don't know how their own code works.
Ok, this is intriguing. Could you explain it in a bit more detail? You said that anything where AI ever generated any part of the code is AI slop, which, given your "Yes" to scenario 3, also includes generating examples to explain coding concepts (as was the premise of scenario 3). Then what is the difference between someone googling "How to do X in language Y", reading Stack Overflow threads where people asked similar things and got answers with full, working code examples, and using those as a starting point/reference, versus someone asking an AI to do the same thing for them in less time?
Furthermore, you say even machine translation is an absolute no-go. Does this only count for AI use, or in general? Because imagine the following, very realistic scenario. Let's go, say, 10 years back. Someone posted a mod with a cool concept for some game on some mod sharing platform. The thing is in Chinese/Russian, though. The mod description's broken English version makes it apparent that the author doesn't speak English at all and just slapped the whole description into Google Translate - which back in that time period certainly did not classify as "AI". The same can safely be assumed to be true of the mod's content: notably rough (because Google Translate was never really accurate for more than single-word dictionary lookups), but understandable, because otherwise you couldn't use the mod at all (unless you happen to fluently understand Russian/Chinese). Would that be different from using modern, AI-driven translators with much clearer, more fluent output, or would that also have been a no-go because the translations weren't crowd-sourced from actual human translators?
Also, just for clarification: you said "Anything the AI wrote, even frameworks, art assets and machine translations" is a no-go, but then go on in 5. to say machine translation is not the same as AI writing code and should not be put in the same basket, as if that were a different story - so which is it now?
DoctorSnakes Posted Friday at 07:10 PM
The discussion here is so much better than the recent Reddit post about this topic on r/VintageStory.
Thorfinn Posted Friday at 07:20 PM
14 hours ago, LexicalAnomaly said: Ideally, mod publishers should be required to label their mods if AI wrote any of the code
I think you ought to do it the other way around, like the "organic" label for food. Sure, I'd guess most people aren't doing much of the work with "AI" right now, but soon...
LadyWYT Posted Friday at 08:27 PM
41 minutes ago, Thorfinn said: I think you ought to do it the other way around, like the "organic" label for food. Sure, I'd guess most people aren't doing much of the work with "AI" right now, but soon...
I dunno that this analogy works all that well, given that there are a lot more labels that can be applied to food, not to mention that some labeling only follows the letter of the law and not necessarily the spirit. Or the label could be outright lying. If we're going with food analogies, I suppose I'd equate generative AI use to serving frozen/premade food in a restaurant. Sure, it's legal, and not exactly hurting the customer or producing a terrible product... but I wouldn't be keen on returning to that restaurant if I found out the meal I had was frozen/premade, and I wouldn't be as likely to go there in the first place if I knew the food was like that. The only exception I can really think of is fast food, where no one expects high quality as much as fast and cheap.
Ultimately, it's probably best to do some research before making a decision, even on things that appear trustworthy. A lot of marketing appeals to emotion and uses flowery language and framing to give products the impression of being one thing when the reality is quite different.
Thorfinn Posted Friday at 08:41 PM
Fair. I was just going with the pair of ideas that people who care either way are likely to prefer non-AI, and that "AI" is going to become more typical. Kind of the "glass skull" idea. Make it easy for those who insist on human-only to find mods to their liking.
LadyWYT Posted Friday at 08:46 PM
1 minute ago, Thorfinn said: Fair. I was just going with the pair of ideas that people who care either way are likely to prefer non-AI, and that "AI" is going to become more typical. Kind of the "glass skull" idea. Make it easy for those who insist on human-only to find mods to their liking.
Yeah, I getcha. Personally, I'd say that maybe both should be labeled. If generative AI wasn't used at all, that's a definite selling point that can be taken advantage of, and if generative AI was used, it's helpful to list how/where so that potential customers can more easily make an informed decision. Even if they don't buy the product, they'll probably appreciate that the label exists.
Thorfinn Posted Friday at 09:07 PM (edited)
Sure. I don't care one way or the other, so, for me, the label is pointless. And like someone else pointed out, whether AI for a very limited purpose (like #1, but short of, say, #3) needs to be revealed is a bit problematic. Like the jab, I think categorizing Purebloods is probably the better way. And with that, I may well be banned. It was my first permanent strike, after all.
Edited Friday at 09:09 PM by Thorfinn
LexicalAnomaly Posted Friday at 11:06 PM
16 hours ago, Rainbow Fresh said: Ok, this is intriguing. Could you explain it in a bit more detail? You said that anything where AI ever generated any part of the code is AI slop, which, given your "Yes" to scenario 3, also includes generating examples to explain coding concepts (as was the premise of scenario 3). Then what is the difference between someone googling "How to do X in language Y", reading Stack Overflow threads where people asked similar things and got answers with full, working code examples, and using those as a starting point/reference, versus someone asking an AI to do the same thing for them in less time? Furthermore, you say even machine translation is an absolute no-go. Does this only count for AI use, or in general? Because imagine the following, very realistic scenario. Let's go, say, 10 years back. Someone posted a mod with a cool concept for some game on some mod sharing platform. The thing is in Chinese/Russian, though. The mod description's broken English version makes it apparent that the author doesn't speak English at all and just slapped the whole description into Google Translate - which back in that time period certainly did not classify as "AI". The same can safely be assumed to be true of the mod's content: notably rough, but understandable, because otherwise you couldn't use the mod at all. Would that be different from using modern, AI-driven translators with much clearer, more fluent output, or would that also have been a no-go because the translations weren't crowd-sourced from actual human translators? Also, just for clarification: you said "Anything the AI wrote, even frameworks, art assets and machine translations" is a no-go, but then go on in 5. to say machine translation is not the same as AI writing code and should not be put in the same basket, as if that were a different story - so which is it now?
I didn't say machine translation was a no-go, just that it should be appropriately labeled. I said that machine translation was not the same thing as AI code and that it was disingenuous to imply that they were, as the OP does. I am not opposed to machine translation, with the understanding that it should be properly labeled as such, because machine translation is often incorrect. Art assets are a no-go to me for ethical reasons, and because I think they reflect poorly on anyone who uses them. AI code is a no-go for me for both ethical and practical reasons. All three should be labeled, and labeled separately, so that people can make informed decisions about what mods they are downloading and what they are supporting. I feel like this should be pretty obvious if you read my post at all.
1 hour ago, Thorfinn said: Sure. I don't care one way or the other, so, for me, the label is pointless. And like someone else pointed out, whether AI for a very limited purpose (like #1, but short of, say, #3) needs to be revealed is a bit problematic. Like the jab, I think categorizing Purebloods is probably the better way. And with that, I may well be banned. It was my first permanent strike, after all.
Yikes. This is your brain on AI broism.
Thorfinn Posted Friday at 11:25 PM (edited)
25 minutes ago, LexicalAnomaly said: Yikes. This is your brain on AI broism.
Why? Aren't we allowed to question our betters anymore? Particularly when even the scientific journals and courts are saying, "You have a point. They lied."
Edited Friday at 11:32 PM by Thorfinn
Rainbow Fresh Posted yesterday at 09:24 AM
10 hours ago, LexicalAnomaly said: I feel like this should be pretty obvious if you read my post at all.
Granted, in retrospect I did misremember, and as such misinterpreted, what the original question in 4. was and how it did indeed already address the machine-translation point - but I had still read your post and still came away with the questions I asked, so it wasn't all that obvious.
Grym7er Posted yesterday at 10:51 AM Author (edited)
11 hours ago, LexicalAnomaly said: I didn't say machine translation was a no-go, just that it should be appropriately labeled. I said that machine translation was not the same thing as AI code and that it was disingenuous to imply that they were, as the OP does.
Edit: Actually, I can explain it better: I agree, it is disingenuous to imply that they are the same thing. I don't think they are the same thing, and I don't think a mod should be considered slop just because machine translation was used. My view extends that notion beyond translation, though, to some extent. For example, in most cases I don't consider the mere use of AI in the making of a product to instantly classify it as slop. I will try out stuff that was made with AI, and if it works fine and the author maintains it, I don't think it is slop. Unfortunately, that is very rarely the case, and most AI things end up as slop. Not all, but most. That is about as succinctly as I can put it.
Edited yesterday at 11:01 AM by Grym7er
Grym7er Posted yesterday at 11:05 AM Author
10 minutes ago, Grym7er said: I don't consider the mere use of AI in the making of a product to instantly classify it as slop
BUT, huuuge BUT: slop (imo) can be either: 1. a product that was made with AI (specifically: code, research, ideas, implementation specifics, etc.), OR 2. a product that CONTAINS a lot of unnecessary AI. Code falls into category 1; art falls into category 2. I don't mind category 1 too much. Category 2 is much more apparent, and my tolerance for it is much lower (it becomes slop much faster, in my eyes), simply because, for me, it really does reduce the quality of the experience. Unnecessary AI functionality, generic AI art, that kind of thing. I don't mind AI art in and of itself; it is again a case of most AI art simply not being good. Generic, bland, too busy, somewhat random.
Facethief Posted yesterday at 12:28 PM
On 5/6/2026 at 3:46 AM, Grym7er said: "If you could eat your food, you would, but you can't so you cheat (by using cutlery)" "If you could write, you would, but you can't so you cheat (by using a word processor)"
Sorry, but I'm not sure why you would think these are comparable to using AI rather than coding. There's no reason someone wouldn't be transferring their work on paper to Word, and I have no clue how using cutlery could be considered cheating at eating (maybe chewing is what you meant?).
Lanceleoghauni Posted 21 hours ago
Perhaps I am a Luddite, but any and all attempted usage of LLMs at any stage of development - any minor tweak or conceptualization, anything at all! - is enough to ensure I will never, ever place it onto my machines.
Diff Posted 19 hours ago (edited)
3 hours ago, Lanceleoghauni said: Perhaps I am a Luddite, but any and all attempted usage of LLMs at any stage of development - any minor tweak or conceptualization, anything at all! - is enough to ensure I will never, ever place it onto my machines.
Maybe a Luddite in the true sense: someone who's not a fan of owners eliminating swaths of jobs with automation and siphoning the paychecks up to themselves while burdening those left with even more work. It's the stance I'd like to take if I could afford to right now, especially since these new machines are explicitly built on people's own work, unsanctioned, to replace them.
Thing is, AI still isn't capable of actually replacing human jobs. It doesn't produce correct code. AI images are unusable for print, and uninspiring and difficult to work with for digital. AI video still tumbles deep into the uncanny valley. The vast majority of real-world use for both is just generating nonsense online, whether for bizarre entertainment or manipulative disinformation. Data is still coming in on this, but in the workplace it doesn't seem to actually boost speed or productivity in the short term, and in the long term, "use it or lose it" seems to be in full effect for the tasks offloaded to AI.
Right now AI investment is propping up the entire economy, and the only way that makes sense is if LLMs rewrite the face of labor. But they're not doing that; they're letting people write bloated emails and fling freakish LinkedIn posts at each other faster. They are falling so far short of their promises that even AI execs are talking about when the bubble pops. It's 2026 and they're still using the exact same tricks they have since GPT-2: "LATEST MODEL is so advanced and so dangerous that we can't release it openly, you'll have to just buy access from us." It's a research project being sold as a guaranteed post-scarcity future that will far more likely just turn into a capitalist nightmare if it ever succeeds. So shunning the whole mess seems pretty sane to me.
Edited 17 hours ago by Diff