Everything posted by Diff
-
Honestly would love customizable crafting of functional items in general. Like, smith out your own sword or knife blades, as long as it's close enough to the existing pattern. Clayform custom storage vessels as long as they're watertight and have enough interior empty voxels. Or being able to add decorative chiseled elements and accessories to armor or clothes.
-
Am I looking at macOS running on a Lenovo laptop?
-
Welp. That's this thread for me, see y'all in the next one.
-
True, every person affected by the global recession did in fact live in a society, but that's quite the leap to get to "this is your fault, actually." Let's assign blame where it's clearly due rather than shrugging our shoulders at big problems because big solutions are hard to come by. I learned in middle school that the antidote to plagiarism is attribution. It's not enough to say "hey yeah I stole this," you must cite where you draw from. Man those are some funky nonstandard definitions you've got there.
-
What are you wanting out of a mod manager that we don't have now? One-click installs from the ModDB and being able to enable/disable things in the game itself seems pretty snazzy, but maybe I've been playing/modding too many 2010s games with awful modding support lately to know what's good.
-
It's something we'll have to deal with sooner rather than later. If we can automate all jobs, what do the people do? Right now we're (according to silicon valley) on the path to automating the creative work and leaving only limited drudgery, tedium, and manual labor for the humans. And eventually, not even that. By having AI gated behind large companies, we divert the pay that would go to the workers to the executives instead, and to an increasingly small number of people working to put themselves out of work, like you say.

The goal here isn't enhanced creation or the transformation of jobs through the assistance of AI. It is explicitly the elimination of jobs to reduce workforce costs and squeeze more work out of fewer employees, and ideally no employees except an executive giving orders to agents who give orders to sub-agents, the same as it ever was, except with the AI industry collecting everybody's paychecks for them.

This isn't pain in service of progress any more than the 2008 recession was. It's profit-chasing whose critical long-term faults are being ignored in favor of short-term gains. There are negative externalities that won't stay ignorable for very long. A post-scarcity world demands radical social change, because our current social structure would see us all unemployed and on less-than-subsistence government assistance with no prospects or opportunity for growth. Introducing a post-scarcity world into our society is not something that can tolerate Silicon Valley's sacred dogma of "move fast and break things."
-
The difference is that this isn't some sort of influencer grind hustle. Concept artists, character designers, texturers, 3D modelers, animators, product designers, graphic designers, package designers, motion designers, their art is their work. They work a 9-5 and that's how they get paid. Every product you use and interact with must be designed. Someone has to make that stock art on that food product. Somebody has to design the cereal mascot and animate them for ads. Somebody has to do the VFX work to composite an animated character into live footage. Somebody has to do the color grading on that footage to make sure it doesn't look like a high school A/V club project. These are all, individually, actual full-time jobs; they're worth not feeding into the unholy silicon valley torment nexus.
-
Both. It goes out and grabs everything, distills it down into a statistical model during training, and then coughs up images based on that model, which can often include whole existing images at a high degree of fidelity. It can't generate from nothing, but it doesn't actively search the internet when you prompt it; the searching was already done during training.
-
Not quite, it doesn't change it enough to be unique. It just throws the whole thing out and tries again. Changing things just a bit is, in general, not good enough. And the copyrightable image elements embedded inside image generation models are quite lossy and can't easily be matched; I don't believe that's been done before. Well-structured text like code is very simple to detect matches across, and the technology for that is time-tested. For images, by contrast, current copyright protections are limited to seeing if you put "mickey mouse" in the input prompt. "Cartoon mouse with gloves," however, gets you right through. Ignoring all that, in your hypothetical it's the same amount of plagiarism, with the additional bonus that the color change and distortion make it obvious the plagiarism was intentional and not incidental.

There absolutely is a ton of flexibility, and even creativity, possible in code. Like I said, there are several sane ways to write Hello World, let alone anything more complex. But there's substantially less freedom than you have when you put digital brush to digital canvas. The comparison there was about the number of degrees of freedom; I don't believe there's any bar you could reach that would make a bit of code as expressive as a post-it note doodle. Code just doesn't have the information density to have the fingerprints embedded in it the same way. So in programming, the plagiarism is much more straightforward and uses basically the same anti-plagiarism systems that we built up for humans. Text is nice and well behaved, and the existing plagiarism systems are perfectly capable of looking past slightly changed variable names. "Code plagiarism" isn't dependent on lines of code, it's dependent on whether a given segment of generated code is too closely related to existing code.

So there are two issues here that I think are being conflated:
1. The fundamental ethical issues with AI ripping off things without permission in order to try to put people out of a job and then sell the product of those jobs back to us.
2. The legal copyright issues that arise because language models do encode a fair share of copyrighted material, faithfully enough to get you in legal trouble if it's ever output by the model.

When we talk about output, we're talking about the legal hot water you'll get in. When we talk about the ethical issues with AI, it's about using people's work to put them out of a job, or at least to sell the idea of putting them out of a job. Yes, it's also unethical to launder FOSS-licensed code or someone's collection of art through an AI, but it's also illegal enough to get you in some hot water.
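To make the "well-structured text is simple to detect matches across" bit concrete, here's a rough sketch of the kind of fingerprinting those anti-plagiarism systems are built on. To be clear, this is not any particular tool's actual algorithm (real systems like MOSS use winnowing and proper tokenizers); the regex, the shingle size, and the example snippets are just made up for illustration.

```python
import re

def tokens(code: str):
    # Crude tokenizer: collapse identifiers and numbers so renames don't matter.
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", code):
        if re.fullmatch(r"[A-Za-z_]\w*", tok):
            out.append("ID")
        elif tok.isdigit():
            out.append("NUM")
        else:
            out.append(tok)
    return out

def shingles(toks, k=5):
    # Overlapping k-token windows act as structural fingerprints.
    return {tuple(toks[i:i + k]) for i in range(len(toks) - k + 1)}

def similarity(a: str, b: str, k=5) -> float:
    # Jaccard overlap of fingerprints: 0.0 = unrelated, 1.0 = structurally identical.
    sa, sb = shingles(tokens(a), k), shingles(tokens(b), k)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

original = "total = 0\nfor item in items:\n    total += item.price"
renamed  = "acc = 0\nfor thing in stuff:\n    acc += thing.price"
print(similarity(original, renamed))  # ~1.0, the renames change nothing structurally
```

Collapsing identifiers is exactly the "looking past slightly changed variable names" part: rename every variable and the fingerprints barely move.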
-
Yo, tell that to the 2008 recession. The foolishness isn't always limited to just the fools themselves; these are jobs, people, and entire interwoven economies that we're talking about. And the AI bubble is certainly big enough to threaten more than just itself and its marks.

That's what vibe coding is: asking AI for code, and copying and pasting the errors back into the prompt in a loop until it works. It's only suitable (being extremely generous with that word) for crappy weekend projects that will never see the outside of your own local computer. So if you go up to a software engineer who builds systems that can survive trials like "someone put an apostrophe in the username box" and equate their job to a lazy, self-destructive programming fad that can't survive those trials, they will correct you. Just like if you reduce the work a graphic artist does to just "generating images." That's not being emotional. And it's not limited to creative work.
-
It is wild that you keep going on about this emotional artist trope, all because someone rejected their craft being referred to as "generating." Again, call up the programmer, ask them if they can teach you how to vibe code like them. The trope you seek will rapidly leave the demographic you're pinning it on.
-
The problem isn't actually being replaced; the concern at the table is management being taken in by the hype. It's easy to believe the big smart tech men (who totally could have been physicists, it's really important to them that you know they could have done theoretical physics) who extol its virtues to anyone with cash in their pockets. But the risk of AI here isn't even AI. It's management thinking AI can replace jobs that they don't even fully understand. That brings up a whole other issue with AI, the de-skilling of humanity as people offload more and more of their brains to a model that's actually incapable of thinking, but I don't think I'll launch into that particular rant right now.

Sorry, this is going to be another wall because I never get to talk about this, but no, actually, that was a huge issue when GitHub Copilot launched (and is still an ongoing one in the background, MUCH lower key though): they used OSS-licensed code to train models that would generate license-free code, effectively collecting money every month to launder open source code. They caught a lot of flak for it, but because code is more structured and easier to match, code generation tools now have features to do exactly that and flag when a generated segment of code is too similar to an existing segment of code. There's nothing like that for general LLMs or image generation models.

The other difference is a social one. Your average programmer is a lot closer to the silicon valley tech bro "fr*ck the consequences, move fast and break things" mentality. Code also has the unique quality that its finished artifact is more of a liability than an asset; the stories keep rolling in of "My vibe coded SaaS app leaked all my users' information, ate my cat, and revealed my OpenAI key, costing me thousands overnight."

Another part of the difference is in, like - I don't have a good phrase for this - "the degree of scale of influence"? between a function that draws a face and a directly generated face. When you paint, you have a million degrees of freedom. What brush you use, what paint you use, exactly how that paint is loaded onto your brush or knife, the angle, speed, pressure, twist of the brush across your canvas. There's a billion trillion ways to draw a cloud, and whether it's Photoshop or a cave painting, you have that control. Generating a function that draws a face, on the other hand, well, we're basically talking about SVG. And AI doesn't make good SVGs. There's less control there, and artistically, basically no influence at all from any existing artist. There is often only one sane way to implement a function, one way to lay out your boilerplate. You don't have those degrees of freedom, and actually AIs are far more effective when they have fewer degrees of freedom; that's why every single Lovable clone insists on using React+Tailwind.

Anyway, that's where another difference creeps in. In programming, code is largely faceless. People have styles, absolutely, but the expressiveness is extremely limited. In that way, the plagiarism is a lot less "substantial" than in art, where there are so many more degrees of freedom that people use to create an endless array of unique visual styles. You can draw fruits in a bowl a million ways. You can only write Hello World relatively few (sane) ways.
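Since "we're basically talking about SVG" may sound abstract: here's roughly what a face-as-a-function ends up looking like. This is a made-up toy, not from any real project, but it shows how few knobs you actually get - some radii, some colors, some coordinates - compared to a brush on canvas.

```python
def face_svg(size: int = 100, skin: str = "#f2c9a0") -> str:
    # The only real "degrees of freedom" here are a handful of numbers and colors.
    c = size / 2
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">
  <circle cx="{c}" cy="{c}" r="{c * 0.9}" fill="{skin}"/>
  <circle cx="{c * 0.7}" cy="{c * 0.8}" r="{size * 0.05}" fill="#000"/>
  <circle cx="{c * 1.3}" cy="{c * 0.8}" r="{size * 0.05}" fill="#000"/>
  <path d="M {c * 0.7} {c * 1.3} Q {c} {c * 1.6} {c * 1.3} {c * 1.3}"
        stroke="#000" fill="none" stroke-width="2"/>
</svg>"""

with open("face.svg", "w") as f:
    f.write(face_svg())
```

Compare the handful of parameters there to the million micro-decisions in a single painted brushstroke and the degrees-of-freedom gap is pretty stark.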
-
In all good humor, if you're confident about that, then it's because you do not grasp how it works. Here, take a much simpler example: pixels. Just pixels. This is a highly desired, highly trained-on, and highly used example. But image generation models can't do it. They can't even keep the pixels square. They cannot commit to an "internal" grid size to align the pixels to, because each part of the image is done practically independently. This conflict with the underlying way that AI image generators work means that despite the demand, despite the simplicity of the requirements, and despite the amount of time that has passed, there are NO models in existence that can generate pixel art assets that would be usable or acceptable in a real project. And that's just due to one of the fundamental issues with these models. There are others.

Tiling takes that up to 11, for a laundry list of additional reasons that all boil down to a similar "the way that this must be done is just fundamentally, entirely at odds with how the technology operates at its most primitive levels," but with the difficulty curve ramped up exponentially. There's no waiting around for this to get better, because this is baked into the foundations. It's part of the basic premise of how the technology operates that this *can't* work.

Abusive's a strong word. It sounds like it's for the best that you got out of the business, then. Regardless, there's been no abuse here, and I'd maintain that it's entirely reasonable for someone to insist on the earlier distinction between artists and the latest silicon valley hype fad whose stated purpose is to obsolete them. If you described a systems engineer's job as vibe coding, unless they're a doormat, they'd correct you.
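If you'd rather see the grid problem than take my word for it, here's a crude way to test it: force an image onto an assumed pixel grid with nearest-neighbour resampling and measure how much it changes. Real pixel art barely changes; AI "pixel art" with drifting, non-square pixels changes a lot. The file name, the cell size, and what counts as "a lot" are all just placeholders here, and it assumes you have Pillow and NumPy around.

```python
import numpy as np
from PIL import Image

def grid_snap_error(path: str, cell: int = 8) -> float:
    # Mean per-channel change when the image is forced onto a `cell`-px grid.
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # One sample per cell, then blown back up with hard edges.
    small = img.resize((max(1, w // cell), max(1, h // cell)), Image.NEAREST)
    snapped = small.resize((w, h), Image.NEAREST)
    diff = np.abs(np.asarray(img, np.int16) - np.asarray(snapped, np.int16))
    return float(diff.mean())

# Hypothetical usage: near zero for honest pixel art, way above zero otherwise.
print(grid_snap_error("generated_sprite.png", cell=8))
```

Even that little check assumes the model committed to a single cell size somewhere in the image, which, per the above, it generally doesn't.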
-
I exclusively play VS on my Steam Deck. Or any game, really. Analog movement and trackpad+gyro camera is nearly the best way to control a game imo. For VS specifically, I've got the grips set to Ctrl and Shift, one grip set to toggle Alt on and off, the Dpad on frequently-used functions like dropping, handbook, and switching tools, and you can bind the left trackpad and right joystick to menus to fill in the less-used buttons.
Tagged with: linux, steam deck (and 1 more) - 3 replies
-
It doesn't work like that. That's not *really* a thing you can do. You don't program an AI like that; AI is fundamentally not really capable of understanding tiling well, and in general the more requirements you lay down, the more it will just ignore you. Smooth tiling is also difficult to add as a post-processing layer, and even if you accomplish that, all you can do is re-roll and cross your fingers, because you cannot make the AI understand what you have just programmed. You can only pass/fail the output and hope that by pure chance one makes it through the barrier while you keep racking up increasing bills.

As someone whose career spans both graphic design and programming, AI really cannot automate anything you care to point it at. It's good at demos. It falls flat immediately on contact with reality. And with something like a texture pack, AI is going to be fighting you every step of the way. With the fundamental issues with the way image generation works being at odds with tiling, and the fundamental issues with the way image generation works being at odds with getting a cohesive visual style, and the fundamental issues... don't believe the hype.

Ah yes, snap firings over trivialities like using correct terminology in a professional setting, definitely a good way to keep your employees from walking on eggshells...
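For what it's worth, that pass/fail gate I mentioned above really is about all you can bolt on from the outside. A crude seam check might look like the sketch below - compare the colour jump across the wrap-around edges to the ordinary jump between neighbouring pixels. The threshold and file name are invented, it assumes NumPy and Pillow, and real checks would be fancier, but the re-roll-and-pray loop around it stays the same.

```python
import numpy as np
from PIL import Image

def tiles_seamlessly(path: str, tolerance: float = 2.0) -> bool:
    # Crude pass/fail: is the jump across the wrap-around edges no worse than
    # the average jump between adjacent interior pixels?
    a = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    interior = (np.abs(np.diff(a, axis=0)).mean() +
                np.abs(np.diff(a, axis=1)).mean()) / 2
    seam = max(np.abs(a[0] - a[-1]).mean(),        # top edge vs bottom edge
               np.abs(a[:, 0] - a[:, -1]).mean())  # left edge vs right edge
    return bool(seam <= tolerance * interior)

# All you can do with a failing texture is reject it and roll again:
# while not tiles_seamlessly("candidate_tile.png"):
#     ...generate another candidate and keep paying for the privilege...
```

Notice the check can't tell the model *why* it failed, which is exactly the problem.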
-
One of my favorite things to do in games is to bind the left trackpad to a scroll wheel with counter/clockwise rotations bound to scroll up/down. Depending on the game, it can also be nice to bind it to a radial menu with items for 0-9 to hop to specific spots on the hotbar.
-
Anyone else running into issues with the on screen keyboard on Deck? Lately pressing the Steam+X chord, or even creating a brand new binding for it, doesn't work. The keyboard summoning sound plays, and if I bring up one of the overlays, it'll be there, but not when the actual game is focused.