How well these things work is pretty much a matter of the quality of the input data set combined with the amount of computing power expended on training. The models will very broadly reproduce associations present in the training data (for example, if there are no items in the input corpus labeled "Mike Schley", then nothing will be generated in broadly that style). The cost of training the models is astronomical (I saw numbers on the order of $5M for training GPT-3 and $600K for Stable Diffusion), but generating outputs from a trained model takes far less computing power and is well within the reach of common home systems. It really comes down to how long you're willing to wait for results, because more compute gets results faster.
Because the input corpus for most of the current models (DALL-E, Stable Diffusion, MidJourney) wasn't particularly well vetted as far as terms of use, the outputs are equally suspect as far as reuse goes. I saw an article reporting that Getty Images won't accept images generated by these systems because of this issue. For example, if someone goes out and finds everything with "Mike Schley" as an associated term and trains a model on that input, then the outputs from that model are totally unfit for any purpose except sale by Mike Schley.
There was a nice article the other day (I don't recall the source now because I'm getting old and feeble-minded) about someone who used Stable Diffusion outputs to generate color palettes from a prompt. Basically, it generates an image, quantizes it down to something like 16 colors, and uses the resulting palette. It's useful for getting colors for a text-described mood, but ultimately it will just be a reflection of the training data. Stable Diffusion, for example, was trained on the LAION-Aesthetics V2 dataset, which has a lot of brightly-colored artwork, so you'll probably get very nicely colored palettes.
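The quantize-and-extract step is easy to reproduce on any image. Here's a minimal stdlib-only sketch; the article's tool presumably used a proper median-cut quantizer (e.g. Pillow's `Image.quantize`), so the `extract_palette` name and the crude posterize-and-count approach below are my own simplification:

```python
from collections import Counter

def extract_palette(pixels, colors=16, levels=4):
    """Crude palette extraction: snap each channel to `levels` buckets,
    then keep the `colors` most common resulting RGB values.
    `pixels` is an iterable of (r, g, b) tuples (0-255 per channel),
    e.g. what PIL's Image.getdata() would yield."""
    step = 256 // levels

    def posterize(c):
        # Snap a channel value to the center of its bucket
        return min(255, (c // step) * step + step // 2)

    counts = Counter(tuple(posterize(c) for c in px) for px in pixels)
    return [color for color, _ in counts.most_common(colors)]
```

The dominant colors come out first, so taking the head of the list gives a mood palette straight from a generated image.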
Style transfer has been a fun toy in the last few years and it's very broadly in the same vein as image synthesis. Being able to push an artist's style onto a photograph is quite fun!
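Full neural style transfer needs a pretrained network, but the simplest relative of the idea, shifting a photo's global color statistics toward a reference artwork, fits in a few lines. A hedged sketch (`color_transfer` is a made-up name; Reinhard-style transfer is normally done in Lab color space, and plain RGB is used here only to keep the example dependency-free):

```python
import statistics

def color_transfer(source, target):
    """Shift each channel of `target` so its mean/stdev match `source`.
    Both arguments are lists of (r, g, b) tuples with 0-255 channels."""
    def channel_stats(pixels, ch):
        vals = [p[ch] for p in pixels]
        # Guard against zero stdev on flat images
        return statistics.mean(vals), statistics.pstdev(vals) or 1.0

    s = [channel_stats(source, ch) for ch in range(3)]
    t = [channel_stats(target, ch) for ch in range(3)]
    out = []
    for px in target:
        out.append(tuple(
            max(0, min(255, round(
                (px[ch] - t[ch][0]) * (s[ch][1] / t[ch][1]) + s[ch][0]
            )))
            for ch in range(3)
        ))
    return out
```

Feeding it a painting's pixels as `source` and a photo's as `target` tints the photo toward the painting's overall color mood, which is a long way from true style transfer but shows the flavor of the idea.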
Quite aside from image generators, the code models are very scary. GitHub Copilot, for example, will generate large bodies of code based on a large number of (perhaps most of) the public repositories on GitHub. In theory, it was only supposed to use public and permissively-licensed repositories, but I've seen Twitter posts from folks whose opinion I respect showing that they were able to get it to reproduce significant bodies of non-public code, including the non-permissive license statements. There is a checkbox that is supposed to prevent that kind of verbatim output, but the results were interesting as well. It certainly generated a large amount of code very quickly, including some results that were flat-out wrong. I suspect that, for far too many new entrants to the field, the future may hold writing tests for machine-generated code.