According to a report published Thursday by NBC News, nonconsensual deepfake p*rnography routinely appears among the top results on major search engines such as Google and Bing, making the content effortless to find.
Deepfakes that graft the faces of real women, usually celebrities, onto the bodies of adult performers have sparked controversy.
Thanks to advances in generative AI, there is a growing underground market for deepfake p*rnography that can be found through a simple Google search. As NBC News previously reported, this phenomenon has been dubbed the “AI Mr. Deep Fake Economy,” and such content can be located through platforms like Google and paid for with Visa and Mastercard.
NBC News surfaced the issue by disabling safe search and combining the names of 36 well-known female celebrities with explicit search terms such as “deepfakes,” “deepfake p*rn,” and “fake nudes.” Bing returned links to deepfake videos in the top results for 35 of those searches, and Google for 34. Bing also surfaced “fake nude images of former teenage female actors from Disney Channel” built from photos in which the actors appeared to be underage.
A Google spokesperson said the company understands how distressing this content can be for the people affected by it and is actively working to bring more protections to Search. Like any search engine, the spokesperson explained, controversial material can appear in results because “Google indexes content that can be found on the internet.” Still, while searches for keywords such as “deepfake” may consistently return results, Google said it has “actively” developed “ranking systems to prevent shocking individuals with unforeseen, harmful, or explicit content.”
At present, the only way to remove nonconsensual deepfake p*rnography from Google search results is for the victim, or an “authorized representative,” to submit a form. The form requires the victim to meet three criteria: they must be clearly identifiable in the deepfake; the imagery must be fabricated and falsely depict them as nude or in a sexually explicit situation; and the imagery must have been distributed without their consent.
While this gives victims a path to get content removed, experts warn that search engines should do more to curb the proliferation of deepfake p*rnography online, which is growing at an alarming pace and now targets ordinary people and even minors, not just celebrities.
In June, child safety experts found that lifelike AI-generated child sexual images were being widely traded online, around the same time the FBI warned about the escalating use of AI-generated deepfakes in sextortion schemes.
Nonconsensual deepfake p*rnography is not only being traded on the internet’s black markets. In November, New Jersey authorities launched an investigation after high school students used AI software to create and distribute fabricated nude images of their female classmates.
In response to the apparent lack of action from tech companies to address the issue of deepfakes, several states have implemented laws that make the distribution of deepfake p*rnography a criminal offense.
For instance, in July, Virginia amended an existing law that criminalizes revenge p*rn to also include any “falsely created videographic or still image.”
Similarly, in October, New York passed a law specifically targeting deepfake p*rn and imposing a $1,000 fine and up to one year in jail for violators.
Additionally, Congress has recently introduced a bill that would make it a crime to spread deepfake p*rn.
Google told NBC News that its search tools do not permit manipulated media or sexually explicit content, but the outlet’s investigation appears to contradict this claim. NBC News found that Google’s Play app store hosts an app that was previously advertised for generating deepfake p*rn, despite a policy against “apps that promote or perpetuate misleading or deceptive visual, video, and/or written material.” This suggests that Google’s enforcement against misleading imagery is inconsistent.
In a statement to Ars, Google said it will begin cracking down on Play Store apps that contain restricted AI-generated content. The measures, outlined in a generative AI policy that takes effect on January 31, will require all apps to comply with developer policies prohibiting AI-generated content that is deceptive or that promotes the exploitation or abuse of children. The policy appears under “Restricted Content” in the Google Play Developer Policies.
According to experts, Google’s failure to actively monitor for abuse has allowed it and other search engines to become popular platforms for individuals seeking to participate in deepfake harassment campaigns.
A Google spokesperson said the company is working on stronger protections specifically designed to provide broader safeguards, so that known victims no longer have to request removals one by one.
A Microsoft representative told Ars that the company is reviewing the inquiry and preparing a response. Any updates Microsoft provides will be added to this report.
Microsoft President Brad Smith has previously voiced concern about the dangers of AI, saying that of all the threats, deepfakes worry him the most. However, he seemed more concerned about deepfakes being used in “foreign cyber influence operations” than about fake p*rn.