Google Photos Could Soon Show AI Image Credits to Protect Users From Deepfakes
Google Photos is reportedly adding new functionality that will let users check whether an image was generated or enhanced using artificial intelligence (AI). As per the report, the photo and video sharing and storage service is getting new ID resource tags that will reveal an image's AI info as well as its digital source type. The Mountain View-based tech giant is likely working on this feature to reduce instances of deepfakes. However, it is unclear how the information will be displayed to users.
Google Photos AI Attribution
Deepfakes have emerged as a new form of digital manipulation in recent years. These are images, videos, audio files, or similar media that have either been digitally generated using AI or manipulated by other means to spread misinformation or mislead people. For instance, actor Amitabh Bachchan recently filed a lawsuit against the owner of a company for running deepfake video ads in which the actor was seen promoting the company's products.
According to an Android Authority report, a new functionality in the Google Photos app will allow users to see if an image in their gallery was created using digital means. The feature was spotted in version 7.3 of the Google Photos app. However, it is not yet active, so those on the latest version of the app will not be able to see it just yet.
Within the layout files, the publication found new strings of XML code pointing towards this development. These are ID resources: identifiers assigned to a specific element or resource in the app. One of them reportedly contained the phrase "ai_info", which is believed to refer to information added to the metadata of an image. This information would presumably be present if the image was generated by an AI tool that adheres to transparency protocols.
Separately, the "digital_source_type" tag is believed to refer to the name of the AI tool or model that was used to generate or enhance the image. These could include tools such as Gemini and Midjourney.
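For context, Android apps declare such user-facing labels as string resources in a res/values/strings.xml file, which is where decompiled "ai_info"-style identifiers typically surface. A hypothetical sketch of how these reported identifiers might appear follows; the exact resource names and label text are illustrative assumptions, not taken from the app:

```xml
<!-- Hypothetical Android string resources; only the "ai_info" and
     "digital_source_type" identifiers are reported, the rest is assumed. -->
<resources>
    <string name="ai_info">AI info</string>
    <string name="digital_source_type">Digital source type</string>
</resources>
```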
However, it is currently uncertain how Google wants to display this information. Ideally, it could be added to the Exchangeable Image File Format (EXIF) data embedded within the image so there are fewer ways to tamper with it. But a downside of that would be that users will not be able to readily see this information unless they go to the metadata page. Alternatively, the app could add an on-image badge to indicate AI images, similar to what Meta did on Instagram.
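The report does not say how the app would read such metadata, but the IPTC photo-metadata standard already defines a "digital source type" vocabulary embedded in an image's XMP packet, including a trainedAlgorithmicMedia value for AI-generated imagery. The following is a minimal, stdlib-only sketch of how a reader could pull that field out of raw image bytes; the helper name and the synthetic image blob are illustrative assumptions, not Google's implementation:

```python
import re
from typing import Optional

# IPTC NewsCodes URI for media created by a generative AI model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def extract_digital_source_type(data: bytes) -> Optional[str]:
    """Scan raw image bytes for an XMP Iptc4xmpExt:DigitalSourceType
    entry (attribute or element form) and return its value, if any."""
    match = re.search(
        rb'Iptc4xmpExt:DigitalSourceType(?:="([^"]+)"|>\s*([^<\s]+))',
        data,
    )
    if not match:
        return None
    value = match.group(1) or match.group(2)
    return value.decode("utf-8", errors="replace")

# Synthetic JPEG-like blob with an XMP fragment, standing in for a
# real AI-generated image file.
fake_image = (
    b"\xff\xd8\xff\xe1<x:xmpmeta><rdf:Description "
    b'Iptc4xmpExt:DigitalSourceType="'
    + TRAINED_ALGORITHMIC_MEDIA.encode()
    + b'"/></x:xmpmeta>\xff\xd9'
)

source_type = extract_digital_source_type(fake_image)
print(source_type)  # prints the trainedAlgorithmicMedia URI
print(source_type == TRAINED_ALGORITHMIC_MEDIA)
```

Reading the field from metadata rather than a visual badge would survive re-uploads that preserve XMP, which is one reason embedding attribution in the file itself is attractive despite being less visible to users.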