A.I Generated PAWGS. It’s ovaaaaa

OnFleekTing

Superstar
Joined
Jul 24, 2015
Messages
4,804
Reputation
115
Daps
20,460
Reppin
DMV
Give it some time.
Porn is gonna be dead if this runs its natural development cycle.

AI porn with fake voices and fake bodies will be deemed safe; real porn will be deemed toxic and regressive.

Don't even care that AI-generated content can't get the fingers right. Society will prop it up.
 

Thurgood Thurston III

#LLNB #LLLB #E4R | woooK nypdK
Joined
Nov 20, 2017
Messages
11,747
Reputation
4,590
Daps
54,054
Reppin
The kirk to the fields

[posted a set of AI-generated image examples]
 

eXodus

Superstar
Supporter
Joined
May 28, 2012
Messages
8,246
Reputation
3,970
Daps
47,393
Reppin
NULL

I feel like Stable Diffusion has majorly improved, cuz I remember in the original Midjourney thread tryna use both and Midjourney was shytting on it, prompt for prompt, bar for bar.
 

Thurgood Thurston III

#LLNB #LLLB #E4R | woooK nypdK
Joined
Nov 20, 2017
Messages
11,747
Reputation
4,590
Daps
54,054
Reppin
The kirk to the fields
Because the key is using the custom UIs.

Here's a list of features from just one of them, called Automatic1111. I took it straight from the GitHub page:

  • Original txt2img and img2img modes
  • One click install and run script (but you still must install python and git)
  • Outpainting
  • Inpainting
  • Color Sketch
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) - will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) - alternative syntax
    • select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
  • Loopback, run img2img processing multiple times
  • X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
    • train embeddings on 8GB (also reports of 6GB working)
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR and Swin2SR, neural network upscalers
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
    • Adjust sampler eta values (noise multiplier)
    • More advanced noise setting options
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Live prompt token length validation
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG
    • can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
    • can be disabled in settings
    • drag and drop an image/text-parameters to promptbox
  • Read Generation Parameters Button, loads parameters in promptbox to UI
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
    • Can use a separate neural network to produce previews with almost no VRAM or compute requirement
  • Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
  • Styles, a way to save part of prompt and easily apply them via dropdown later
  • Variations, a way to generate same image but with tiny differences
  • Seed resizing, a way to generate same image but at slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative, reverse Euler method of cross attention control
  • Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
  • Custom scripts with many extensions from community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
  • No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts
  • xformers, major speed increase for select cards: (add --xformers to commandline args)
  • via extension: History tab: view, direct and delete images conveniently within the UI
  • Generate forever option
  • Training tab
    • hypernetworks and embeddings options
    • Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
  • Clip skip
  • Hypernetworks
  • Loras (same as Hypernetworks but more pretty)
  • A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt.
  • Can select to load a different VAE from settings screen
  • Estimated completion time in progress bar
  • API
  • Support for dedicated inpainting model by RunwayML.
  • via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip image embeds (implementation of vicgalle/stable-diffusion-aesthetic-gradients)
  • Stable Diffusion 2.0 support - see wiki for instructions
  • Alt-Diffusion support - see wiki for instructions
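That attention syntax can be sketched with a toy parser. This is a rough illustration only, not Automatic1111's actual prompt tokenizer; `parse_attention` is a hypothetical helper, and the 1.1-per-parenthesis multiplier is the commonly cited default:

```python
import re

def parse_attention(prompt):
    """Toy sketch: pull emphasis weights out of an A1111-style prompt.
    (word:1.21) sets an explicit weight; ((word)) multiplies the weight
    by an assumed 1.1 per pair of parentheses. Hypothetical helper, not
    the webui's real parser."""
    weights = {}
    # explicit weights, e.g. (tuxedo:1.21)
    for word, w in re.findall(r"\(([^():]+):([\d.]+)\)", prompt):
        weights[word.strip()] = float(w)
    # nested parentheses, e.g. ((tuxedo)) -> 1.1 ** depth
    for m in re.finditer(r"(\(+)([^():]+)\)+", prompt):
        opens, word = m.group(1), m.group(2).strip()
        if word not in weights:
            weights[word] = round(1.1 ** len(opens), 3)
    return weights

print(parse_attention("a man in a ((tuxedo))"))     # {'tuxedo': 1.21}
print(parse_attention("a man in a (tuxedo:1.21)"))  # {'tuxedo': 1.21}
```

Note how the two syntaxes from the list land on the same weight: two parens is 1.1 squared, which is why the docs give 1.21 as the equivalent explicit value.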
Stable Diffusion's strength is in customization which can let you generate high quality images if you know what you're doing.
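Those saved generation parameters are just a text blob riding inside the image file, which is why dragging an image back into the UI can restore your settings. A minimal sketch of pulling such a blob apart, assuming the common "prompt / Negative prompt / settings line" layout (format inferred from typical examples, not an official spec):

```python
def parse_infotext(text):
    """Sketch: split an A1111-style 'parameters' string (as stored in a
    PNG text chunk) into prompt, negative prompt, and a settings dict.
    Layout assumed from common examples, not a formal spec."""
    lines = text.split("\n")
    # last line like "Steps: 20, Sampler: Euler a, Seed: 1234"
    settings_line = lines[-1] if ":" in lines[-1] else ""
    body = lines[:-1] if settings_line else lines
    prompt_lines, negative = [], ""
    for ln in body:
        if ln.startswith("Negative prompt:"):
            negative = ln[len("Negative prompt:"):].strip()
        else:
            prompt_lines.append(ln)
    settings = {}
    for part in settings_line.split(", "):
        if ": " in part:
            key, val = part.split(": ", 1)
            settings[key] = val
    return {"prompt": "\n".join(prompt_lines),
            "negative": negative,
            "settings": settings}

info = ("a man in a (tuxedo:1.21)\n"
        "Negative prompt: blurry\n"
        "Steps: 20, Sampler: Euler a, Seed: 1234")
parsed = parse_infotext(info)
print(parsed["settings"]["Seed"])  # 1234
```

Keeping the parameters with the image is what makes the "PNG info" tab work: nothing to look up, the recipe travels with the output.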
 

TaxCollector13459

2018 Coli Rookie of the Year
Joined
Mar 30, 2018
Messages
8,319
Reputation
1,662
Daps
19,637
They look real, but they have a waxy hint to them. I don't know how else to explain it. If you've ever seen the Asian woman with the prosthetic cleavage, you'd know what I mean. You have to do a double take because you're not sure what's off about they titty skin. And honestly, I didn't even notice the fingers until someone pointed it out down thread. I was focused on the titty meats.

It's like the skin don't have flaws.
 