AI Has Already Created As Many Images As Photographers Have Taken in 150 Years. Statistics for 2023

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,278
Reputation
7,364
Daps
134,229


AI Has Already Created As Many Images As Photographers Have Taken in 150 Years. Statistics for 2023​

In the past year, dozens of communities dedicated to AI art have sprung up across the Internet, from Reddit to Twitter to Discord, with thousands of AI artists honing their skills at crafting precise prompts and sharing the results with others. The amount of content created during this time is hard to measure, but whatever it is, it’s incredibly big. We’ve kept track of some AI image statistics and facts and tried to estimate (at least roughly) how much content has been created since text-to-image algorithms took off last year. Read on to learn more about how we arrived at this number and how some of the most prominent algorithms contribute to it.

Featured image created by Discord user Sixu using Midjourney
AI image statistics: Number of AI-created images

Key Insights​

  • More than 15 billion images have been created using text-to-image algorithms since last year. To put this in perspective, it took photographers 150 years, from the first photograph taken in 1826 until 1975, to reach the 15 billion mark.
  • Since the launch of DALL-E 2, people have been creating an average of 34 million images per day.
  • The fastest-growing product is Adobe Firefly, the suite of AI algorithms built into Adobe Photoshop. It reached 1 billion images created just three months after its launch.
  • Midjourney has 15 million users, the largest user base of any image generation platform for which we have publicly available statistics. Compare: Adobe Creative Cloud, which consists of Adobe Photoshop and other graphic design and video editing software, including the generative AI tool Adobe Firefly, has 30 million users, according to Prodesigntools.
  • Approximately 80% of the images (i.e., 12.590 billion) were created using models, services, platforms, and applications based on Stable Diffusion, which is open source.

DALL-E 2​

In April 2022, OpenAI released its image-generation model, DALL-E 2. For the first few months, it was available by invitation only, and the company gradually expanded access until September 2022, when the tool became available to all users without any barriers. At that point, OpenAI reported that users were generating more than 2 million images per day with DALL-E 2. We don’t know for sure what time period this number covers, or whether it represents an average volume of images generated. Assuming it is an average, approximately 916 million images have been generated on this single platform in 15 months.
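The back-of-the-envelope arithmetic behind that 916 million figure can be sketched as follows; the daily rate and the 15-month window come from the article, while the ~30.5-day average month is our own rounding assumption:

```python
# Rough reconstruction of the DALL-E 2 estimate above: a reported average of
# 2 million images/day, sustained over the ~15 months since the April 2022 launch.
daily_images = 2_000_000
months = 15
days = round(months * 30.5)                   # ~458 days, assuming an average month length
total = daily_images * days
print(f"~{total / 1e6:.0f} million images")   # ~916 million images
```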

Midjourney​

Another prominent generative AI model, Midjourney, went live in July 2022. According to Photutorial’s estimate, Midjourney’s Discord (the algorithm is only available through Discord) receives about 20 to 40 jobs per second, with 15 million registered users and 1.5-2.5 million members active at any given time. With this in mind, we used 30 jobs per second as the average rate, which works out to roughly 2.5 million images created daily. As a result, about 964 million images have been created with Midjourney since its launch.
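As a sanity check, 30 jobs per second compounds to just under 2.6 million images a day, which reaches the 964 million total in roughly a year. The figures are the article’s; the arithmetic below is ours:

```python
# Midjourney estimate: 30 jobs/second, around the clock.
jobs_per_second = 30
daily = jobs_per_second * 24 * 60 * 60        # 2,592,000 images/day (~2.5 million)
days_to_total = 964_000_000 / daily           # ~372 days: July 2022 through mid-2023
print(daily, round(days_to_total))
```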

Stable Diffusion​

Stable Diffusion, the text-to-image model developed by Stability AI, was released in August 2022. So far, there are two official places to test Stable Diffusion: DreamStudio and Stability AI’s space on Hugging Face. According to Emad Mostaque, CEO of Stability AI, Stable Diffusion has more than 10 million users across all channels. If we extrapolate from the Midjourney numbers and trends at hand, users generate about 2 million images daily through the official Stable Diffusion channels, and in the year-plus since the release this number has reached 690 million images.
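The generation window implied by the official-channel figure is easy to back out (the figures are the article’s; the division is ours):

```python
# Official Stable Diffusion channels: ~2 million images/day since August 2022.
daily = 2_000_000
total = 690_000_000
print(total / daily)   # 345.0 -- the estimate implies roughly 345 days of generation
```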

However, the most challenging part is that Stable Diffusion is an open-source model, which means that the amount of content created with this model is not limited to what was produced on the official spaces owned by Stability AI. We have multiple platforms, applications, and services built on top of Stable Diffusion technology. The total audience of all these entities is also quite large and incalculable, and the amount of content they produce on a daily basis is really hard to estimate. And it’s growing all the time.

To get at least an idea of the scale, we took a look at some of the most popular repositories, such as GitHub, Hugging Face, and Civitai, which together host tens of thousands of Stable Diffusion-based models uploaded by their users. We then went back to the Midjourney case and applied its trends to the Stable Diffusion models on these platforms. However, just before we hit “publish,” we received an email from the Civitai team with some valuable statistics about their platform that helped us make our estimates more precise. For example, the Civitai team shared that they have a total of 213,994,828 model downloads on their platform, and that the top 10 most downloaded models account for 2% of the total weekly downloads.

As a result, we recalculated some of our estimates and found that more than 11 billion images have been created using models from these three repositories. If we add other popular models (such as Runway, which we count separately) and the official channels of Stability AI, the number of images created with Stable Diffusion increases to 12.590 billion, which represents 80% of all AI images created with text-to-image algorithms.

Adobe Firefly​

The most recent model in our research arrived in March 2023, when Adobe unveiled Firefly, a suite of generative AI models focused on visual content creation. Within 6 weeks of launch, users created more than 100 million assets. With Firefly’s integration into Adobe Photoshop in May 2023, the number of images has grown rapidly, given the number of people using Photoshop worldwide. In its latest press release, Adobe shared its AI image statistics: the number of images created with Adobe Firefly reached 1 billion just 3 months after launch.


In total, more than 15 billion AI-created images have been generated using Stable Diffusion, Adobe Firefly, Midjourney, and DALL-E 2. That’s more than Shutterstock’s entire library of photos, vectors, and illustrations, and one-third of the number of images ever uploaded to Instagram.
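Putting the per-platform estimates together, the roughly 80% Stable Diffusion share checks out (all figures are from this article; the arithmetic sketch below is ours):

```python
# Per-platform totals from the article, in billions of images.
totals = {
    "Stable Diffusion (all channels)": 12.590,
    "Adobe Firefly": 1.0,
    "Midjourney": 0.964,
    "DALL-E 2": 0.916,
}
grand_total = sum(totals.values())   # ~15.47 billion
sd_share = totals["Stable Diffusion (all channels)"] / grand_total
print(f"{grand_total:.2f}B total, Stable Diffusion share {sd_share:.0%}")
```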
AI image statistics: amount of content across platforms vs. AI-created images
AI image statistics: time it took to reach 15 billion

Limitations​

To sum up, our exploration of AI image statistics, based on available data and extrapolations, has shed light on the scope of this phenomenon. However, it’s important to acknowledge the limitations of our research.

While we’ve strived to provide insights into the volume of images generated by AI algorithms, obtaining accurate and up-to-date AI image statistics remains a challenge. In addition, the rapidly evolving nature of AI technology, combined with the increasing breadth of models and applications, makes it difficult to keep track of current data, which becomes outdated on a daily basis. As the landscape continues to evolve, we’re committed to updating these statistics as new information becomes available. Feel free to share your feedback along with any data, insights, or observations you may have to help us keep AI image statistics current and accurate.

Sources​

Civitai
EarthWeb, How Many Pictures Are on Instagram in 2023? (Photo Statistics), 2023
Photutorial, Midjourney statistics (Updated: August 2023), 2023
Photutorial, Shutterstock statistics (2023): Revenue, subscribers, market share, & more, 2023
Adobe, Adobe Firefly Expands Globally, Supports Prompts in Over 100 Languages, 2023
Social Shepherd, 22 Essential Pinterest Statistics You Need to Know in 2023, 2023
Bloomberg, Stability AI Raises Seed Round at $1 Billion Value, 2022
OpenAI, DALL·E now available without waitlist, 2022
Insider, Facebook Users Are Uploading 350 Million New Photos Each Day, 2013
Fstoppers, [Stats] How Many Photos Have Ever Been Taken?, 2012
 

bnew


US Copyright Office denies protection for another AI-created image​

By Blake Brittain

September 6, 2023, 6:20 PM EDT

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration
  • Summary
  • Midjourney-created "space opera" artwork not protectable, office says
  • Office has previously rejected copyrights for AI-generated work

Sept 6 (Reuters) - The U.S. Copyright Office has again rejected copyright protection for art created using artificial intelligence, denying a request by artist Jason M. Allen for a copyright covering an award-winning image he created with the generative AI system Midjourney.

The office said on Tuesday that Allen's science-fiction themed image "Theatre D'opera Spatial" was not entitled to copyright protection because it was not the product of human authorship.

The Copyright Office in February rescinded copyrights for images that artist Kris Kashtanova created using Midjourney for a graphic novel called "Zarya of the Dawn," dismissing the argument that the images showed Kashtanova's own creative expression. It has also rejected a copyright for an image that computer scientist Stephen Thaler said his AI system created autonomously.

Allen said on Wednesday that the office's decision on his work was expected, but he was "certain we will win in the end."

"If this stands, it is going to create more problems than it solves," Allen said. "This is going to create new and creative problems for the copyright office in ways we can't even speculate yet."

Representatives for Midjourney did not immediately respond to a request for comment on the decision on Wednesday.

Allen applied last September to register a copyright in "Theatre D'opera Spatial," an image evoking a futuristic royal court that won the Colorado State Fair's art competition in 2022. A Copyright Office examiner requested more information about Midjourney's role in creating the image, which had received national attention as the first AI-generated work to win the contest.


Allen told the office that he "input numerous revisions and text prompts at least 624 times to arrive at the initial version of the image" using Midjourney and altered it with Adobe Photoshop.

The office asked Allen to disclaim the parts of the image that Midjourney generated in order to receive copyright protection. It rejected Allen's application after he declined.

The office's Copyright Review Board affirmed the decision on Tuesday, finding the image as a whole was not copyrightable because it contained more than a minimal amount of AI-created material.


The office also rejected Allen's argument that denying copyrights for AI-created material leaves a "void of ownership troubling to creators."

Read more:
Humans vs. machines: the fight to copyright AI art
AI-created images lose U.S. copyrights in test for new technology
AI-generated art cannot receive copyrights, US court says

Reporting by Blake Brittain in Washington



Theatre D'opera Spatial
 

bnew


Stable Signature: A new method for watermarking images created by open source generative AI​


October 6, 2023•
6 minute read



AI-powered image generation is booming and for good reason: It’s fun, entertaining, and easy to use. While these models enable new creative possibilities, they may raise concerns about potential misuse from bad actors who may intentionally generate images to deceive people. Even images created in good fun could still go viral and potentially mislead people. For example, earlier this year, images appearing to show Pope Francis wearing a flashy white puffy jacket went viral. The images weren’t actual photographs, but plenty of people were fooled, since there weren’t any clear indicators to distinguish that the content was created by generative AI.


At FAIR, we’re excited about driving continued exploratory research in generative AI, but we also want to make sure we do so in a manner that prioritizes safety and responsibility. Today, together with Inria, we are excited to share a research paper and code that details Stable Signature, an invisible watermarking technique we created to distinguish when an image is created by an open source generative AI model. Invisible watermarking incorporates information into digital content. The watermark is invisible to the naked eye but can be detected by algorithms—even if people edit the images. While there have been other lines of research around watermarking, many existing methods create the watermark after an image is generated.

More than 11 billion images have been created using models from three open source repositories, according to Everypixel Journal. With open source code, however, an invisible watermark applied after generation can be removed simply by deleting the line that generates it.







While the fact that these safeguards exist is a start, this simple tactic shows there’s plenty of potential for this feature to be exploited. The work we’re sharing today is a solution for adding watermarks to images that come from open source generative AI models. We’re exploring how this research could potentially be used in our models. In keeping with our approach to open science, we want to share this research with the AI community in the hope of advancing the work being done in this space.

How the Stable Signature method works



Stable Signature closes off this removal route by rooting the watermark in the model itself, so that any generated image can be traced back to where it was created.

Let’s take a look at how this process works.







Alice trains a master generative model. Before distributing it, she fine-tunes a small part of the model (called the decoder) to root a given watermark for Bob. This watermark may identify the model version, a company, a user, etc.

Bob receives his version of the model and generates images. The generated images carry Bob’s watermark and can be analyzed by Alice or third parties to determine whether they were generated by Bob using the generative AI model.

We achieve this in a two-step process:



  • First, two convolutional neural networks are jointly trained. One encodes an image and a random message into a watermark image, while the other extracts the message from an augmented version of the watermark image. The objective is to make the encoded and extracted messages match. After training, only the watermark extractor is retained.
  • Second, the latent decoder of the generative model is fine-tuned to generate images containing a fixed signature. During this fine-tuning, batches of images are encoded, decoded, and optimized to minimize the difference between the extracted message and the target message, as well as to maintain perceptual image quality. This optimization process is fast and effective, requiring only a small batch size and a short time to achieve high-quality results.
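For intuition only, here is the encode/extract roundtrip in its simplest possible form, using naive least-significant-bit embedding on a list of pixel values. This is a classical toy technique, not Stable Signature’s learned, edit-robust watermark; it only illustrates the objective the two networks are trained to satisfy, namely that the extracted message matches the encoded one:

```python
# Toy LSB watermark: hide one message bit in the least significant bit of each
# leading pixel of a grayscale "image" (a flat list of 0-255 values).

def embed(pixels, message_bits):
    # Overwrite the LSB of pixel i with message bit i; the visual change is tiny.
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bits):
    # Read the message back out of the LSBs.
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 17, 96, 45, 230, 101, 88, 54]
message = [1, 0, 1, 1, 0, 1, 0, 0]
watermarked = embed(image, message)
assert extract(watermarked, len(message)) == message  # roundtrip succeeds
```

Unlike this sketch, the learned watermark survives cropping, compression, and color changes because the extractor is trained on augmented versions of the watermarked images.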



Assessing the performance of Stable Signature



We know that people enjoy sharing and reposting images. What if Bob shared the image he created with 10 friends, who each then shared it with 10 more friends? During this time, it’s possible that someone could have altered the image, such as by cropping it, compressing it, or changing the colors. We built Stable Signature to be robust to these changes. No matter how a person transforms an image, the original watermark will likely remain in the digital data and can be traced back to the generative model where it was created.






During our research, we discovered two major advantages of Stable Signature over passive detection methods. First, we were able to control and reduce the generation of false positives, which occur when we mistake an image produced by humans for one generated by AI. This is crucial given the prevalence of non-AI-generated images shared online. For example, the most effective existing detection method can spot approximately 50% of edited generated images but still generates a false positive rate of approximately 1/100. Put differently, on a user-generated content platform receiving 1 billion images daily, around 10 million images would be incorrectly flagged to detect just half of the generated ones. On the other hand, Stable Signature detects images with the same accuracy at a false positive rate of 1e-10 (which can be set to a specific desired value). Moreover, our watermarking method allows us to trace images from various versions of the same model—a capability not possible with passive techniques.
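The false-positive numbers above are easy to verify, and the "can be set to a specific desired value" property has a simple statistical reading: on a random, non-watermarked image each extracted bit matches by chance with probability 1/2, so the false-positive rate at a given match threshold is a binomial tail. The 48-bit length and 45-bit threshold below are illustrative assumptions of ours, not figures from the paper:

```python
from math import comb

def false_positive_rate(k, t):
    # P(at least t of k random bits match) = sum_{i=t..k} C(k, i) / 2^k
    return sum(comb(k, i) for i in range(t, k + 1)) / 2**k

# A passive detector with a 1/100 false-positive rate on 1 billion daily images:
print(int(1_000_000_000 * 0.01))       # 10,000,000 images wrongly flagged per day

# With a hypothetical 48-bit watermark, requiring 45 matching bits pushes the
# false-positive rate below 1e-10; relaxing to 44 bits does not.
print(false_positive_rate(48, 45))
```

Raising the threshold lowers the false-positive rate at the cost of detection robustness, which is how the rate can be tuned to a desired value.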



How Stable Signature works with fine-tuning



A common practice in AI is to take foundational models and fine-tune them to handle specific use cases that are sometimes even tailored to one person. For example, a model could be shown images of Alice’s dog, and then Alice could ask for the model to generate images of her dog at the beach. This is done through methods like DreamBooth, Textual Inversion, and ControlNet. These methods act at the latent model level, and they do not change the decoder. This means that our watermarking method is not affected by these fine-tunings.

Overall, Stable Signature works well with vector-quantized image modeling (like VQGANs) and latent diffusion models (like Stable Diffusion). Since our method doesn’t modify the diffusion generation process, it’s compatible with the popular models mentioned above. We believe that, with some adaptation, Stable Signature could also be applied to other modeling methods.


Providing access to our technology



The use of generative AI is advancing at a rapid pace. Currently, there aren’t any common standards for identifying and labeling AI-generated content across the industry. In order to build better products, we believe advancements in responsibility research, like the work we’re sharing today, must exist in parallel.

We’re excited to share our work and give the AI research community access to these tools in the hope of driving continued collaboration and iteration. While it’s still early days for generative AI, we believe that by sharing our research, engaging with the community, and listening to feedback, we can all work together to ensure this impressive new technology is built, operated, and used in a responsible way.

The research we’re sharing today focuses on images, but in the future we hope to explore the potential of integrating our Stable Signature method across more generative AI modalities. Our method works with many popular open source models; however, there are still limitations. It does not scale to non-latent generative models, so it may not be future-proof against new generation technologies. By continuing to invest in this research, we believe we can chart a future where generative AI is used responsibly for exciting new creative endeavors.

This blog post reflects the work of Matthijs Douze and Pierre Fernandez. We'd like to acknowledge the contributions of Guillaume Couairon, Teddy Furon, and Hervé Jégou to this research.


Read the paper

Get the code
 

Secure Da Bag

Veteran
Joined
Dec 20, 2017
Messages
37,005
Reputation
19,715
Daps
118,107
It’s democratized art for the non-artistic. As a byproduct, it’s also lessened the value of art, since anyone can make it.

The art that's created is sometimes amazing. But most of it is still meh. Unless you're a prompt-engineering gawd, it does some weird shyt.
 

Wargames

One Of The Last Real Ones To Do It
Joined
Apr 1, 2013
Messages
23,179
Reputation
3,920
Daps
86,233
Reppin
New York City
The art that's created is sometimes amazing. But most of it is still meh. Unless you're a prompt-engineering gawd, it does some weird shyt.
I mean that is most art. Art should elicit an emotional response, which is subjective. I don’t know if AI can do that on demand, whereas a great human artist, given time, can just knock out amazing pieces of inspiration. However, just like a near-infinite number of monkeys on typewriters can eventually type out Shakespeare, I am pretty sure AI art has made and will continue to make sublime pieces of art.

The scary part of all this is I look at AI more like NES Nintendos than, say, the original iPod. Apple had to add additional functionality to turn an iPod into an iPhone and then into an iPad. Nintendo systems just became stronger, so you go from an NES, which is where AI is now, to a Super NES, N64, Wii, to a Switch, all while other systems push the hardware even harder elsewhere. In the future, AI iterations are just going to be stronger and able to do what we see now significantly better. Graphically, we might all have a world-class artist at our fingertips in a few years.

In many ways AI just replaces the skilled tech user. There are people who have had whole careers based on their knowledge of Photoshop. Soon you can tell a program to do what that person did in a few sentences, and it will do it much quicker, for cheaper. Being able to give feedback that the AI can use to edit specific details as wanted is the next step, and when that happens I think we’ll see AI art become more artistic, as users’ taste regarding edits will determine output.
 
Last edited:

bnew

I mean that is most art. Art should elicit an emotional response, which is subjective. I don’t know if AI can do that on demand, whereas a great human artist, given time, can just knock out amazing pieces of inspiration. However, just like a near-infinite number of monkeys on typewriters can eventually type out Shakespeare, I am pretty sure AI art has made and will continue to make sublime pieces of art.

The scary part of all this is I look at AI more like NES Nintendos than, say, the original iPod. Apple had to add additional functionality to turn an iPod into an iPhone and then into an iPad. Nintendo systems just became stronger, so you go from an NES, which is where AI is now, to a Super NES, N64, Wii, to a Switch, all while other systems push the hardware even harder elsewhere. In the future, AI iterations are just going to be stronger and able to do what we see now significantly better. Graphically, we might all have a world-class artist at our fingertips in a few years.

In many ways AI just replaces the skilled tech user. There are people who have had whole careers based on their knowledge of Photoshop. Soon you can tell a program to do what that person did in a few sentences, and it will do it much quicker, for cheaper. Being able to give feedback that the AI can use to edit specific details as wanted is the next step, and when that happens I think we’ll see AI art become more artistic, as users’ taste regarding edits will determine output.

soon? it can be done right now!


Wargames

So @bnew you’re more of an expert on this than most, do you know of any AI companies a regular person can invest in? Right now the tech bros seem to be the only ones getting in on the ground floor.
 

bnew

So @bnew you’re more of an expert on this than most, do you know of any AI companies a regular person can invest in? Right now the tech bros seem to be the only ones getting in on the ground floor.

i'm not an expert at all:hubie:, just an enthusiast. :leon:

during the gold rush the people who sold shovels and pickaxes profited greatly. well, GPUs, specifically the ones made by Nvidia, are in high demand. it's possible the stock might be overbought already but you never know. I don't have any AI investments. you'll have to do research on the hardware needed to train and run AI models and who sells it.
 

bnew



1/2
Emad Mostaque: last summer it took 20 seconds to generate an image, now we can do 300 images per second and over 1000 with the new Nvidia chips

2/2
Source:
 