Diff Posted August 14, 2025 Report Posted August 14, 2025 (edited) 1 hour ago, Thorfinn said: People who should know better, since there's no real risk of AI ever being serious competition... The problem isn't actually being replaced, the concern at the table is management being taken in by the hype. It's easy to believe the big smart tech men (who totally could have been physicists, it's really important to them that you know they could have done theoretical physics) who extol its virtues to anyone with cash in their pockets. But the risk of AI here isn't even AI. It's management thinking AI can replace jobs that they don't even fully understand. That brings up a whole other issue with AI, the de-skilling of humanity as people offload more and more of their brains to a model that's actually incapable of thinking, but I don't think I'll launch into that particular rant right now. 56 minutes ago, Koobze said: but if I write a function definition and the AI fills it for me, it's absolutely fine? Sorry, this is going to be another wall because I never get to talk about this, but no, actually, that was a huge issue when GitHub Copilot launched (and is still an ongoing one in the background, MUCH lower key though): they used OSS-licensed code to train models that would generate license-free code, effectively collecting money every month to launder open source code. They caught a lot of flak for it, but because code is more structured and matches are easier to identify, code generation models now have features for exactly this: detecting when a generated segment of code is too similar to an existing segment of code. There's nothing like that for general LLMs or image generation models. The other difference is a social one. Your average programmer is a lot closer to the silicon valley tech bro "fr*ck the consequences, move fast and break things" mentality. 
Code also has the unique quality that its finished artifact is more of a liability than an asset; the stories keep rolling in of "My vibe coded SaaS app leaked all my users' information, ate my cat, and revealed my OpenAI key, costing me thousands overnight." Another part of the difference is in like - I don't have a good phrase for this - "the degree of scale of influence"? between a function that draws a face and a directly generated face. When you paint, you have a million degrees of freedom. What brush you use, what paint you use, exactly how that paint is loaded onto your brush or knife, the angle, speed, pressure, twist of the brush across your canvas. There's a billion trillion ways to draw a cloud, and whether it's Photoshop or a cave painting, you have that control. Generating a function that draws a face, on the other hand, well, we're basically talking about SVG. And AI doesn't make good SVGs. There's less control there, and artistically, basically no influence at all from any existing artist. There is often only one sane way to implement a function, one way to lay out your boilerplate. You don't have those degrees of freedom, and actually AIs are far more effective when they have fewer degrees of freedom, which is why every single Lovable clone insists on using React+Tailwind. Anyway, that's where another difference creeps in. In programming, code is largely faceless. People have styles, absolutely, but the expressiveness is extremely limited. In that way, the plagiarism is a lot less "substantial" than in art, where there are so many more degrees of freedom that people use to create an endless array of unique visual styles. You can draw fruits in a bowl a million ways. You can only write Hello World relatively few (sane) ways. Edited August 14, 2025 by Diff Man I love billionaires. They're just so smart and cool and stuff.
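For anyone wondering what that "too similar" check could even look like, here's a toy sketch - my own construction for illustration, NOT how Copilot's actual duplicate filter works - that flags a generated snippet when its token n-grams overlap too heavily with a known snippet. The 0.8 threshold and the sample snippets are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// ngrams returns the set of n-token shingles in a whitespace-tokenized snippet.
func ngrams(code string, n int) map[string]bool {
	toks := strings.Fields(code)
	set := make(map[string]bool)
	for i := 0; i+n <= len(toks); i++ {
		set[strings.Join(toks[i:i+n], " ")] = true
	}
	return set
}

// jaccard measures overlap between two shingle sets: 0 = disjoint, 1 = identical.
func jaccard(a, b map[string]bool) float64 {
	inter := 0
	for k := range a {
		if b[k] {
			inter++
		}
	}
	union := len(a) + len(b) - inter
	if union == 0 {
		return 0
	}
	return float64(inter) / float64(union)
}

func main() {
	known := "if err != nil { return fmt.Errorf ( \"load: %w\" , err ) }"
	generated := "if err != nil { return fmt.Errorf ( \"load: %w\" , err ) }"
	sim := jaccard(ngrams(known, 3), ngrams(generated, 3))
	fmt.Printf("similarity: %.2f\n", sim) // identical snippets score 1.00
	if sim > 0.8 {
		fmt.Println("too similar to known code: throw it out and regenerate")
	}
}
```

The point being: well-behaved, structured text makes this kind of matching cheap, which is exactly why it exists for code and not for images.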
Thorfinn Posted August 14, 2025 Author Report Posted August 14, 2025 53 minutes ago, Koobze said: I find it pretty funny that it is artists and musicians who are the most up in arms about the plagiarism aspect. 54 minutes ago, Koobze said: because of "open source" being a thing that encourages copying and modifying code, it is socially acceptable even though it is clearly removing many jobs and greatly impacting entry-level programmer jobs. This is the major reason I assumed "AI" was not up to the task at the moment. Those whose job it is to stay at the cutting edge of programming don't see it as a threat to their jobs, but only to grunt work. While the stereotypically more emotional artists and musicians, who maybe don't have your background and knowledge base, fear that it will replace them.
Diff Posted August 14, 2025 Report Posted August 14, 2025 Just now, Thorfinn said: This is the major reason I assumed "AI" was not up to the task at the moment. Those whose job it is to stay at the cutting edge of programming don't see it as a threat to their jobs, but only to grunt work. While the stereotypically more emotional artists and musicians, who maybe don't have your background and knowledge base, fear that it will replace them. It is wild that you keep going on about this emotional artist trope, all because someone rejected their craft being referred to as "generating." Again, call up the programmer, ask them if they can teach you how to vibe code like them. The trope you seek will rapidly leave the demographic you're pinning it on.
Thorfinn Posted August 14, 2025 Author Report Posted August 14, 2025 4 minutes ago, Diff said: The problem isn't actually being replaced, the concern at the table is management being taken in by the hype. It's easy to believe the big smart tech men (who totally could have been physicists, it's really important to them that you know they could have done theoretical physics) who extol its virtues to anyone with cash in their pockets. But the risk of AI here isn't even AI. It's management thinking AI can replace jobs that they don't even fully understand. But that's a short-term thing. A phase. If they jump in with both feet, firing all their programmers, and they are wrong, they go broke. It's only if you are wrong about AI's limits that this redounds to their benefit. And either way, it's their money. They can spend it as they like, right? Think of all the business management fads that have taken place over the years, and how many companies have been laid low by falling for the latest one. Creative destruction is a good thing. It weeds out bad ideas.
Thorfinn Posted August 14, 2025 Author Report Posted August 14, 2025 2 minutes ago, Diff said: Again, call up the programmer, ask them if they can teach you how to vibe code like them. I still have no idea what vibe code is.
Krougal Posted August 14, 2025 Report Posted August 14, 2025 5 hours ago, Yakkob said: yeah the same could be said for most of my dad's side of the family... they were jewish, Ukrainian, kulaks... literally all things that would get you g3nocided in the "tolerant soviet union"... the sole reason any of them survived is that my great grandma, her sister and her brother immigrated to the US... otherwise they would be in some mass grave right now... Yeah, Carter may have been one of the worst presidents, but I have to deeply appreciate "Jews for wheat", it helped a lot of people get out of that shithole.
Koobze Posted August 14, 2025 Report Posted August 14, 2025 4 minutes ago, Diff said: ... but as it's more structured and easier to identify, code generation models have features to do exactly that, and identify when a segment of code generated is too similar to an existing segment of code. There's nothing like that for general LLMs or image generation models. Yeah - and I know there was a big hoopla about training on open-source code and laundering it without a license etc, but as you said the solution is a post-processing phase that checks "is this copying too closely?" and then changes it enough to be unique. I can write - or ask AI to write - a bunch of image filters, and write a system to detect output that is too similar to existing art, and then apply my colour-change + distortion filter until it is different enough to pass the test. Does it make it more or less plagiarism? 13 minutes ago, Diff said: There's a billion trillion ways to draw a cloud, and whether it's Photoshop or a cave painting, you have that control. Generating a function that draws a face on the other hand, well, we're basically talking about SVG. And AI doesn't make good SVGs. There's less control there, and artistically, basically no influence at all from any existing artist. There is often only one sane way to implement a function, one way to lay out your boilerplate. You don't have those degrees of freedom So I agree that the difference is in the complexity of the product - the number of unique attributes of the output that allow the freedom of expression. And I agree that writing an SVG generator to make a face is simple and there are not many solutions if you just want "round circle, two dots for eyes, and a line for a mouth". 
But I would say that a modern SaaS application also has a ton of flexibility in how to write it - the overall architecture and what components you have, the language(s) you use, frameworks, the logical flow through the application - and you can absolutely tweak those using AI. I've asked AI to generate the same code for me multiple times - for example "give me an html file, css file, and js file that'll ingest a JSON file containing kubernetes logs and display them" and each time is different, and I can say "no I do not like this approach of event overloading and instead make it so I have my main function first call parsefile, then store the results in an array named, then visualize it with a RenderLogs function, but actually also do...." - there is a ton of flexibility there. So just by adding functionality to my code I'm increasing the complexity (and creative potential) - so is it the number of lines of code that determines if this is code-plagiarism or some truly creative output that's simply assisted by the AI? 10 minutes ago, Thorfinn said: I still have no idea what vibe code is. I am not 100% sure either but my impression is that it's basically "you're the pointy-haired boss telling the AI what to write, and you just go with it, letting the AI fix the issues as you go". Ultimately - specifically for programming - I find AI assist fundamentally changes my approach to programming. Having been programming for such a long time, part of my design process is banging out code to see what works - this is not an efficient use of AI. Instead (again for myself) I find it front-loads the complexity, so I need to consider the overall architecture and write a proper design document to get good (controllable) output from the LLM. It changes the "programming" task into a "software architect" task, which is far more interesting. 
When an artist chooses a brush, canvas, specific oil paint and specific blend of colours, they are half-experimenting but also drawing on their experience which lets them know that "later on when this water and the sun are drawn, these brush strokes will be like waves". The artist often (not always!) has some conception of what the finished product should look like, and what feelings it should evoke, and I would argue that if an artist were able to write a full "design document" for their painting, the AI could generate it, and for me the result would be something new and not plagiarism.
Diff Posted August 14, 2025 Report Posted August 14, 2025 (edited) 20 minutes ago, Thorfinn said: But that's a short-term thing. A phase. If they jump in with both feet, firing all their programmers, and they are wrong, they go broke. It's only if you are wrong that this redounds to their benefit. And either way, it's their money. They can spend it as they like, right? Think of all the business management fads that have taken place over the years, and how many companies have been laid low by falling for the latest one. Creative destruction is a good thing. It weeds out bad ideas. Yo, tell that to the 2008 recession. The foolishness is not always limited to just their own selves, these are jobs, people, and entire interwoven economies that we're talking about. And the AI bubble is certainly big enough to threaten more than just its own self and its marks. 19 minutes ago, Thorfinn said: I still have no idea what vibe code is. Asking AI for code, and copying and pasting the errors back into the prompt in a loop until it works. It's only suitable (being extremely generous with that word) for crappy weekend projects that will never see the outside of your own local computer. So if you go up to a software engineer who builds systems that can survive trials like "someone put an apostrophe in the username box" and equate their job to a lazy, self-destructive programming fad that can't survive those trials, they will correct you. Just like if you reduce the work a graphic artist does to just "generating images." That's not being emotional. And it's not limited to creative work. Edited August 14, 2025 by Diff Clarity
Krougal Posted August 14, 2025 Report Posted August 14, 2025 5 minutes ago, Diff said: Yo, tell that to the 2008 recession. The foolishness is not always limited to just their own selves, these are jobs, people, and entire interwoven economies that we're talking about. And the AI bubble is certainly big enough to threaten more than just its own self and its marks. Too true! 5 minutes ago, Diff said: Asking AI for code, and copying and pasting the errors back into the prompt in a loop until it works. It's only suitable (being extremely generous with that word) for crappy weekend projects that will never see the outside of your own local computer. So if you go up to a software engineer who builds systems that can survive trials like "someone put an apostrophe in the username box" and equate their job to a lazy, self-destructive programming fad that can't survive those trials, they will correct you. Just like if you reduce the work a graphic artist does to just "generating images." That's not being emotional. And it's not limited to creative work. You're not kidding. I see a lot of this as an engineer. Co-workers would send me POSH code that GPT pulled out of its ass, and I would immediately tell them why it didn't work, and I am not the greatest scripter by any means. I haven't even found it good for weekend projects; trying to update 7DTD and MC mods has been a time-wasting disaster. GPT would lie and stall for hours until you called it out a few times, and then it would finally fess up and apologize for being full of shit and wasting your time. On the flip side, I have a friend who has been a programmer for over 25 years and he swears by the AI allowing him to develop more, faster. I'm sure a lot of it comes down to knowing how to task the AI. He uses it to do a lot of grunt work. So in the end, I think human experience and knowledge is still required...for now anyway.
Teh Pizza Lady Posted August 14, 2025 Report Posted August 14, 2025 1 minute ago, Thorfinn said: This is the major reason I assumed "AI" was not up to the task at the moment. Those whose job it is to stay at the cutting edge of programming don't see it as a threat to their jobs, but only to grunt work. While the stereotypically more emotional artists and musicians, who maybe don't have your background and knowledge base, fear that it will replace them. It's pretty hilarious how tools like Github Copilot and Google's Gemini and even ChatGPT are notoriously *bad* at programming, even though Copilot has access to every line of code ever written on Github, Gemini is powered by GOOGLE, and ChatGPT is pretty much the forefront creator of what people think of when you say "AI". Personally I find Claude 3.7 Sonnet to be the only tolerable one out there, and even it has its flaws. But at the same time, it also has its strengths. I would put my head through my desk if I had to write an API accessor for EVE Swagger Interface (ESI), but Claude did it with zero complaints in NodeJS of all languages and packaged it all up nicely for me into a Node package that all I had to do was import into my own code and I was off to the races! It was also able to crawl the VSAPI docs pages and find the detailed information I needed to understand how to override certain method calls when making my own Vintage Story mod, up now on the mod db - just look for Expanded Stomach. AI is a tool and like any tool, it has to be used in the right way. You wouldn't hammer in a screw, or turn a wrench on a wire brad.
Koobze Posted August 14, 2025 Report Posted August 14, 2025 6 minutes ago, Krougal said: On the flip side, I have a friend who has been a programmer for over 25 years and he swears by the AI allowing him to develop more, faster. I'm sure a lot of it comes down to knowing how to task the AI. He uses it to do a lot of grunt work. So in the end, I think human experience and knowledge is still required...for now anyway. This is exactly how I feel also. If you know what you want, and can describe it to like 95% accuracy, the AI can generate it - the less flexibility you give the AI, the better the result. In my day to day work, it largely automates many small tasks. Most of my programming nowadays is in GoLang where you have a lot of code like: 

Quote

filename := "somefile.doc"
mydocument, errorMsg := loadDocument(filename)
if errorMsg != nil {
    logger.Fatalf("Failed to load document %v: %v", filename, errorMsg)
} else {
    logger.Infof("Successfully loaded document %v", filename)
}

The error-handling part (everything from the if onward) is what I would expect the AI assistant built into my IDE (vscode with google's gemini code assist extension) to write for me. On its own this is a very trivial bit of code, but it adds up, and the predictive ability of the AI is really impressive - it can write a lot of code for me very quickly, and I can eyeball it and confirm it does what I would have done (with maybe minor stylistic differences, or different phrasing in the error messages etc). I would equate it with an artist creating a file "box.jpg" and drawing a single line, and the AI fills in the rest of the box. If I am an artist drawing a cityscape that is full of buildings, and I need to draw 2,000 apartment buildings, the question is: is the style of each building and its location relevant to my creative vision? Is any random arrangement of buildings sufficient - maybe I'm preparing this cityscape for a giant asteroid smashing into it, and that's where my creative energies will go? 
Or maybe I want a specific shape for my city center, and I can draw some elements there to steer the AI, but then the periphery is not as important? Or indeed every single building is important, in which case I can just go draw each one myself and not offload it to AI. 2 minutes ago, traugdor said: AI is a tool and like any tool, it has to be used in the right way. I agree with this 100%.
Teh Pizza Lady Posted August 14, 2025 Report Posted August 14, 2025 2 minutes ago, Koobze said: If I am an artist drawing a cityscape that is full of buildings, and I need to draw 2,000 apartment buildings, the question is: is the style of each building and its location relevant to my creative vision? Is any random arrangement of buildings sufficient - maybe I'm preparing this cityscape for a giant asteroid smashing into it, and that's where my creative energies will go? Or maybe I want a specific shape for my city center, and I can draw some elements there to steer the AI, but then the periphery is not as important? Or indeed every single building is important, in which case I can just go draw each one myself and not offload it to AI. FUN FACT there are image AI processes where you can draw a rough sketch of something and tell it what the sketch is supposed to be and it will draw in the details as best as it can. Image AIs are good for generating concepts and pictures for the latest waifu wars.
Diff Posted August 14, 2025 Report Posted August 14, 2025 17 minutes ago, Koobze said: Yeah - and I know it was a big hoopla about training on opensource code and laundering it without license etc, but as you said the solution is a post-processing phase that checks "is this copying too closely?" and then changes it enough to be unique. I can write - or ask AI to write - a bunch of image filters, and write a system to detect output that is too-similar to existing art, and then apply my colour-change + distortion filter until it is different enough to pass the test. Does it make it more or less plagiarism? Not quite, it doesn't change it enough to be unique. It just throws the whole thing out and tries again. Changing things just a bit in general is not good enough. And the copyrightable image elements embedded inside image generation models are quite lossy and can't easily be matched, I don't believe that's been done before. Well-structured text like code is very simple to detect matches across, and the technology for that is time-tested. Instead, for images current copyright protections are limited to seeing if you put "mickey mouse" in the input prompt. "Cartoon mouse with gloves," however, gets you right through. Ignoring all that, in your hypothetical, it's the same amount of plagiarism, but there is the additional bonus of the color change and distortion making it obvious that plagiarism was intentional and not incidental. 24 minutes ago, Koobze said: So I agree that the difference is in the complexity of the product - the number of unique attributes of the output that allow the freedom of expression. And I agree that writing an SVG generator to make a face is simple and there are not many solutions if you just want "round circle, two dots for eyes, and a line for a mouth". 
But I would say that a modern SaaS application also has a ton of flexibility in how to write it - the overall architecture and what components you have, the language(s) you use, frameworks, the logical flow through the application - and you can absolutely tweak those using AI. I've asked AI to generate the same code for me multiple times - for example "give me an html file, css file, and js file that'll ingest a JSON file containing kubernetes logs and display them" and each time is different, and I can say "no I do not like this approach of event overloading and instead make it so I have my main function first call parsefile, then store the results in an array named, then visualize it with a RenderLogs function, but actually also do...." - there is a ton of flexibility there. So just by adding functionality to my code I'm increasing the complexity (and creative potential) - so is it the number of lines of code that determines if this is code-plagiarism or some truly creative output that's simply assisted by the AI? There absolutely is a ton of flexibility, and even creativity that is possible in code. Like I said, there's several sane ways to write Hello World, let alone anything more complex. But there's substantially less freedom than you have when you put digital brush to digital canvas. The comparison there was in the number of degrees of freedom; I don't believe there's any bar you could reach that could make a bit of code as expressive as a post-it note doodle. Code just doesn't have the information density to have the fingerprints embedded in it the same way. So in programming, the plagiarism is much more straightforward and uses basically the same anti-plagiarism systems that we built up for humans. Text is nice and well behaved, and the existing plagiarism systems are perfectly capable of looking past slightly changed variable names. 
"Code plagiarism" isn't dependent on lines of code, it's dependent on whether this segment of generated code is too closely related to existing code. So there's two issues here that I think are being conflated. 1. The fundamental ethical issues with AI ripping off things without permission in order to try to put people out of a job and then sell the product of those jobs back to us. And 2. The legal copyright issues that arise because language models do encode a fair share of copyrighted material that is encoded faithfully enough to get you in legal trouble if it's ever output by the model. When we talk about output, we're talking about the legal hot water you'll get in. When we talk about the ethical issues with AI, it's about using people's work to put them out of a job, or at least sell the idea of putting them out of a job to people. Yes, it's also unethical to launder FOSS-licensed code or someone's collection of art through an AI, but it's also illegal enough to get you in some hot water.
LadyWYT Posted August 14, 2025 Report Posted August 14, 2025 2 hours ago, Thorfinn said: Weirdly enough, at the outset, I was pretty sure it was not up to the task. And obviously it will be plagiarism if you define that sufficiently broadly. Is a bar band doing a cover "plagiarism"? Used to be that didn't bother anyone. Now if a computer is told to make a cover, even if it does a crappy job, people run around like their hair is on fire. People who should know better, since there's no real risk of AI ever being serious competition... I think the main risk of AI being "competition" is that AI has no human element, so you don't need to pay it or worry about it questioning your motives or anything like that. And certain individuals/corporations will do anything for a dollar, so if they can just replace people with AI and still get everyone to pay for the product...anyway, you get the idea. As for cover bands and what's plagiarism versus what's not...I think it all boils down to the intent behind the action. A cover band might not have written the song themselves, but they also aren't trying to claim that as their own work either. It's a homage to the original artist, and a good homage is going to point the viewers/listeners to the original. Actual plagiarism is taking someone else's work and trying to pass it off as your own, and in that case whoever is doing the plagiarizing isn't going to be giving credit to the original creator. You also have those that don't plagiarize, but do steal the creative work of others and try to make money off it. When it comes to AI, it needs a library of reference material to generate things from, since it cannot create. Unfortunately, what goes into the reference library isn't always open-source, public domain, or otherwise content that was given with permission.
Krougal Posted August 14, 2025 Report Posted August 14, 2025 Well, my question is does it actually generate the images or does it just go out and search and grab whatever it needs and edit it?
Diff Posted August 14, 2025 Report Posted August 14, 2025 5 minutes ago, Krougal said: Well, my question is does it actually generate the images or does it just go out and search and grab whatever it needs and edit it? Both. It goes out and searches and grabs everything, distills it down into a statistical model during training, and then coughs up images based on that model which can often include whole existing images at a high degree of fidelity. It can't generate from nothing, but it doesn't actively search the internet, the searching is already done.
LadyWYT Posted August 14, 2025 Report Posted August 14, 2025 25 minutes ago, Krougal said: Well, my question is does it actually generate the images or does it just go out and search and grab whatever it needs and edit it? 17 minutes ago, Diff said: Both. It goes out and searches and grabs everything, distills it down into a statistical model during training, and then coughs up images based on that model which can often include whole existing images at a high degree of fidelity. It can't generate from nothing, but it doesn't actively search the internet, the searching is already done. I think this is also the main reason many artists get upset. Many share their work on social media, as that's a very good way to get your name out there and make potential customers aware of your products. However, my understanding is that the way some of these AIs are trained is...basically just scraping the internet for any images or other data it can find, and using it without regard to copyright or other protections that may apply.
Krougal Posted August 14, 2025 Report Posted August 14, 2025 2 minutes ago, LadyWYT said: I think this is also the main reason many artists get upset. Many share their work on social media, as that's a very good way to get your name out there and make potential customers aware of your products. However, my understanding is that the way some of these AIs are trained is...basically just scraping the internet for any images or other data it can find, and using it without regard to copyright or other protections that may apply. Everyone wants to be an influencer and content creator and live off the advertising revenue. Every guide I (and everyone else) have ever put on Steam, I can find on a bunch of other sites, they don't even bother changing it, like I made a standard disclaimer that says "If you are reading this guide any place other than Steam, it has been reproduced without my permission and is likely not up to date." and they don't even bother to remove that paragraph. I bet the person they attribute as the author isn't even a real person either. Just some stock art and a bullshit blurb, like the marketing characters on food products.
Teh Pizza Lady Posted August 14, 2025 Report Posted August 14, 2025 37 minutes ago, Krougal said: Well, my question is does it actually generate the images or does it just go out and search and grab whatever it needs and edit it? Initially image AIs are trained to recognize patterns. If you include a picture of detailed eyes, then you will need to tell the AI that it's a picture of "detailed_eyes" and it will catalogue everything it can about that picture under a tag "detailed_eyes". If you have a picture of a person wearing a kimono and you tag the picture with "kimono" then it will catalogue everything it can about that image under "kimono". The fun part comes when you start adding multiple tags to an image. The AI will start to catalogue the image under multiple categories. Then it takes everything it knows about that category and averages out the data so that when you give it an image of a kimono without tags, it will understand that it's looking at a picture of a kimono. It might also categorize a picture of a dress in the same way and if it knows nothing about suits, it might also say that a 3-piece suit is also a kimono. The important thing here is that what you're creating is what's called in the computing world a "neural network". The more complex the network, the more data it can hold, but consequently, the longer it takes to train it properly to correctly identify what is in an image. Now some brilliant data scientists asked the question, "Can we reverse this neural network (NN) so that it can recall what it knows about things and reproduce it from memory?" The answer initially was no. The data path through the NN was one-way. But then someone said, "What if we feed the NN some random noise and then cycle through the NN and only keep the noise that activates the "neurons" that recognize what we want in the image?" This is a process called sampling, and it is done using an algorithm called a sampler. 
The sampler requires a model, which is a collection of everything the NN knows about image data. By passing over the noisy data in several iterations, it keeps only the noise that activates the nodes matching your prompt, until you're left with an image that only triggers the nodes required by your input prompt. So you ask the question, "Does the AI actually generate images?" No. It does not. It generates noise and then goes back and starts removing it. You could say it degenerates...which is quite fitting considering what most people use image AIs for.
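If it helps to see the shape of that loop, here's a drastically simplified toy of my own: a single scalar instead of an image tensor, a fixed "target" standing in for the trained network, and a made-up step size. Nothing like a real sampler, just the iterate-from-noise-toward-what-the-prompt-activates idea:

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Stand-in for "the nodes your prompt activates": a single target value.
	// A real sampler works on a whole image tensor against a trained model.
	target := 0.7

	// Step 1: start from pure random noise.
	x := rand.Float64()

	// Step 2: each pass removes a fraction of the remaining mismatch between
	// the current sample and what the prompt's nodes would respond to.
	for step := 0; step < 50; step++ {
		x += 0.2 * (target - x)
	}

	fmt.Printf("final sample: %.3f (target %.3f)\n", x, target)
	// prints "final sample: 0.700 (target 0.700)" regardless of the starting noise
}
```

The starting noise is different every run, but the loop always converges on something the "network" recognizes - which is why the same prompt gives you endless variations on the same theme.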
Diff Posted August 14, 2025 Report Posted August 14, 2025 (edited) 45 minutes ago, Krougal said: Everyone wants to be an influencer and content creator and live off the advertising revenue. Every guide I (and everyone else) have ever put on Steam, I can find on a bunch of other sites, they don't even bother changing it, like I made a standard disclaimer that says "If you are reading this guide any place other than Steam, it has been reproduced without my permission and is likely not up to date." and they don't even bother to remove that paragraph. I bet the person they attribute as the author isn't even a real person either. Just some stock art and a bullshit blurb, like the marketing characters on food products. The difference is that this isn't some sort of influencer grind hustle. Concept artists, character designers, texturers, 3D modelers, animators, product designers, graphic designers, package designers, motion designers, their art is their work. They work a 9-5 and that's how they get paid. Every product you use and interact with must be designed. Someone has to make that stock art on that food product. Somebody has to design the cereal mascot and animate them for ads. Somebody has to do the VFX work to composite an animated character into live footage. Somebody has to do the color grading on that footage to make sure it doesn't look like a high school A/V club project. These are all, individually, actual full-time jobs, and it's worth not feeding them into the unholy silicon valley torment nexus. Edited August 14, 2025 by Diff Clarity and benevolence towards humankind
Teh Pizza Lady Posted August 14, 2025 Report Posted August 14, 2025 As a side note, one of the things that AI struggles with is hands, because they can take the shape of so many different poses. What did the big brains do? They fed those images back into the AI, marked them with the tag "bad_hands", and added a second component to the sampling method: the negative prompt. The sampler also removes noise that triggers nodes in the NN that match up with the negative prompt. So if your prompt was "man" and your negative was "mustache", it would eventually settle on an approximation of an image of a man, but would do its best not to have the approximation include a mustache.
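Mechanically, Stable Diffusion-style samplers usually implement negative prompts via classifier-free guidance: the model predicts noise twice, once conditioned on the positive prompt and once on the negative, and the sampler steps toward the positive prediction and away from the negative one. A minimal sketch with made-up vectors (the numbers here are assumptions for illustration, not real model outputs):

```python
import numpy as np

def guided_noise(eps_pos, eps_neg, scale=7.5):
    # Classifier-free guidance with a negative prompt: push the
    # denoising direction toward the positive prediction and away
    # from the negative one.
    return eps_neg + scale * (eps_pos - eps_neg)

# Toy noise predictions for the same latent under the two prompts;
# a real sampler gets these from one NN run with each conditioning.
eps_positive = np.array([0.9, 0.1, 0.4])  # conditioned on "man"
eps_negative = np.array([0.1, 0.8, 0.4])  # conditioned on "mustache"

eps = guided_noise(eps_positive, eps_negative)
```

With a guidance scale above 1, whatever the negative prediction pulls toward gets actively suppressed at every denoising step, which is why putting "mustache" in the negative prompt steers the sample away from mustaches rather than merely not asking for one.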
Koobze Posted August 14, 2025 Report Posted August 14, 2025 13 minutes ago, Diff said: but it doesn't actively search the internet, the searching is already done. Some newer models do indeed search the internet, but it's not the same as the "distilling into a statistical model" phase; it's more like a kid doing an open-book exam, with the reference material right there to copy from without really understanding it. The statistical model part can generate some wild stuff though, if you remember those crazy super-trippy pictures with eyes appearing in clouds and the like. I think if models skewed the generated results using the actual contents/structure of their statistical model (which would be unique to each model based on the training process), the results could be considered much more unique and slightly less derivative. 29 minutes ago, Diff said: So there's two issues here that I think are being conflated. 1. The fundamental ethical issues with AI ripping off things without permission in order to try to put people out of a job and then sell the product of those jobs back to us. And 2. The legal copyright issues that arise because language models do encode a fair share of copyrighted material that is encoded faithfully enough to get you in legal trouble if it's ever output by the model. I agree, but also disagree. There are people who can look at a drawing and recreate it 100% by hand; some people have pitch-perfect memory and could play a Chopin piano piece perfectly; and even if you increase the challenge so that they need to study the material for years, they can still eventually recreate the original - and it's generally allowed. The process of learning from and copying source material is not unique to humans, and I think there's nothing wrong with AI doing it, except indeed the moral aspect of "stealing someone's work without permission" and then "reproducing their work, to supplant the original author's role".
I have personally learned a great deal from copyrighted material, and used my gained knowledge to reproduce it (not 100% identically) and gotten paid for it - that was the point of going to school and learning a topic on which to build a career. So the process of "learning from copyrighted material, distilling it into a statistical model (in my head), and making money from it by reproducing that material cheaper (as a junior developer)" is allowed for humans, but not for AI? I feel like it is the same thing we all already do to some extent, only the AI can do it much faster, and much better. The friction is due to how quickly some of these human efforts are being replaced. 1 hour ago, LadyWYT said: A cover band might not have written the song themselves, but they also aren't trying to claim that as their own work either. It's a homage to the original artist, and a good homage is going to point the viewers/listeners to the original. Theoretically, if the AI could remember 100% of the input data, and you asked it to generate a picture of something, and with that it gave you a "bill of materials" saying: this is 1% RandomDude2690 from Tumblr, 1% Rembrandt, 1% .... - would that make it ok? You would have attribution and could then follow along to the source for the original if you wanted to. But, as we can see with AI summaries by search engines, attribution is often not worth a cent, and even a flawed reproduction is "good enough" for very many people. 32 minutes ago, Diff said: It's an actual job, it's worth not feeding it into the unholy silicon valley torment nexus. So automation should not replace actual jobs? I think we are hundreds or thousands of years too late for that. Again, to me it feels like it's the scale and velocity at which the AI can do what we humans have been doing for millennia that's causing the problems.
Just to be clear here - I am personally terrified about the future and my own future employment prospects, never mind my child's future prospects. I was surprised and somewhat insulted to find AI-generated music in my Spotify playlist, and considered canceling my subscription entirely, though simply blocking AI-generated music is a good enough solution (for now). I hate AI slop on the internet, and do think that every AI company that trained on copyrighted material should be fined so heavily that they are bankrupted and must start again training on licensed (or truly, intentionally "free") material. However, the technology is there and won't disappear, it is useful, and it is just a tool. I think that the purpose of society is to ensure that society can progress and thrive in a way that benefits everyone, so it's up to "all of us" to ensure that this tool is used responsibly. The truly unfortunate part is that all of us have a vote, but the richest individuals and corporations are the ones who have the biggest votes, and also have the most to gain and the least to lose when this tool goes wild.
LadyWYT Posted August 14, 2025 Report Posted August 14, 2025 44 minutes ago, Krougal said: Everyone wants to be an influencer and content creator and live off the advertising revenue. Many do, but not everyone can. It takes a lot of work and skill to operate at a level that can generate a livable income from that. 46 minutes ago, Krougal said: Every guide I (and everyone else) have ever put on Steam, I can find on a bunch of other sites, they don't even bother changing it, like I made a standard disclaimer that says "If you are reading this guide any place other than Steam, it has been reproduced without my permission and is likely not up to date." and they don't even bother to remove that paragraph. I bet the person they attribute as the author isn't even a real person either. Just some stock art and a bullshit blurb, like the marketing characters on food products. I mean, it's possible, but fake personas were already a thing long before AI and computers, and aren't always bad. Many authors use pen names and actors use stage names, for example. However, that's getting into the weeds. I think again, it's a case that boils down to intent behind action, as well as...how many times can a concept be floated by various people, and still remain a unique thing? If I'm not mistaken, this is an issue regarding software that checks plagiarism, for example, since it has thrown false positives when no plagiarism was committed. That's not to say someone didn't just copy/paste words or images and try to pass it off as their own work, but it doesn't mean that everything similar is an attempt at theft. Sometimes people will come up with the same idea or concept independently of each other, and there's only so many ways to write certain phrases and whatnot. 38 minutes ago, Diff said: The difference is that this isn't some sort of influencer grind hustle. 
Concept artists, character designers, texturers, 3D modelers, animators, product designers, graphic designers, package designers, motion designers, their art is their work. They work a 9-5 and that's how they get paid. Every product you use and interact with must be designed. Someone has to make that stock art on that food product. Somebody has to design the cereal mascot and animate them for ads. Somebody has to do the VFX work to composite an animated character into live footage. Somebody has to do the color grading on that footage to make sure it doesn't look like a high school A/V club project. It's an actual job, it's worth not feeding it into the unholy silicon valley torment nexus. Pretty much this. My general impression is that most of those chasing the "influencer" lifestyle, aren't really interested in putting in the actual work to realize their goal, as much as they are just looking for a way to make a quick buck and feed an ego. I'm not saying every influencer is like that either, that's just my general observation/opinion on the type that the lifestyle seems to attract. My overall opinion on AI is that it's a tool useful for drumming up ideas, memeing, or organizing information, but achieving the best results for a finished product requires human talent.
LadyWYT Posted August 14, 2025 Report Posted August 14, 2025 2 minutes ago, Koobze said: I feel like it is the same thing we all already do to some extent, only the AI can do it much faster, and much better. The friction is due to how quickly some of these human efforts are being replaced. I wouldn't say AI can do it better, just that it can easily achieve a passable result while cutting out most of the human element entirely (and people, as a rule, can be difficult to work with). The latter is spot-on though. 3 minutes ago, Koobze said: Theoretically if the AI could remember 100% of the input data, and you asked it to generate a picture of something, and then with that it gave you a "bill of materials" saying: this is 1% RandomDude2690 from Tumblr, 1% Rembrandt, 1% .... - would that make it ok? You would have attribution and could then follow along to the source for the original if you wanted to. But, as we can see with AI summaries by search engines, attribution is often not worth a cent, and even a flawed reproduction is "good enough" for very many people. I think it's another case of...it depends on the intent behind the product's creation. For personal use, memes, and the like, I don't really see a problem with using AI to spit out things. When it comes to actually getting paid for the product though, it's a different beast, since there is an exchange of goods/services taking place. 7 minutes ago, Koobze said: So automation should not replace actual jobs? I think we are hundreds or thousands of years too late for that. Again to me it feels like it's the scale and velocity at which the AI can do what we humans have been doing for millennia that's causing the problems. I also think a lot of the friction stems from the original expectations of automation versus how it's actually being put into practice.
Automation can be a very good thing, since it makes certain goods more obtainable, or certain jobs safer since humans don't have to physically do the dangerous parts any longer. In theory, that means robots can do the grunt work that no one wants because it's boring or dangerous, which leaves people free to pursue careers that are more rewarding. In practice though...yes, we got some of the benefits of automation, but instead of robots taking over the dangerous or boring repetitive jobs, they're being used to push people out of more desirable work and into boring/dangerous work (assuming there are even jobs available). Nasty can of worms, that, and again, it boils down to intent of action. 13 minutes ago, Koobze said: However, the technology is there and won't disappear, it is useful, and it is just a tool. I think that the purpose of society is to ensure that society can progress and thrive in a way that benefits everyone, so it's up to "all of us" to ensure that this tool is used responsibly. The truly unfortunate part is that all of us have a vote, but the richest individuals and corporations are the ones who have the biggest votes, and also have the most to gain and the least to lose when this tool goes wild. Pretty much. The tool itself is neither good nor bad--that depends entirely on how one uses the tools at their disposal. When the tool is used for the wrong purposes, bad things happen.
Krougal Posted August 14, 2025 Report Posted August 14, 2025 57 minutes ago, Diff said: The difference is that this isn't some sort of influencer grind hustle. Concept artists, character designers, texturers, 3D modelers, animators, product designers, graphic designers, package designers, motion designers, their art is their work. They work a 9-5 and that's how they get paid. Every product you use and interact with must be designed. Someone has to make that stock art on that food product. Somebody has to design the cereal mascot and animate them for ads. Somebody has to do the VFX work to composite an animated character into live footage. Somebody has to do the color grading on that footage to make sure it doesn't look like a high school A/V club project. It's an actual job, it's worth not feeding it into the unholy silicon valley torment nexus. Yes, and I was talking about the blatant rip-off ones. It was in no way meant to disparage these people or the tremendous amount of work they do. @Koobze makes some really good points though. There has never been any protection against automation replacing people; it may be painful for those involved, but it is part of progress. Lately it even weighs on me that every system I automate and optimize puts good people out of work. Eventually I will automate my own job away.