Harwood ND AI business



Duckslayer100

Founding Member
Joined
Aug 11, 2015
Posts
4,644
Likes
253
Points
348
Location
ND's Flatter Half
should become law that all AI-generated photos must contain a watermark indicating such

without it, the high-trust social fabric is erased

online anyway

Definitely, except who polices those who wish to control us and the narrative?

"Laws for thee, but not for me." -- Your faithless and corrupt political masses
 

Davy Crockett

Founding Member
Thread starter
Joined
Apr 22, 2015
Posts
15,563
Likes
2,857
Points
783
Location
Boondocks
should become law that all AI-generated photos must contain a watermark indicating such

without it, the high-trust social fabric is erased

online anyway

Off topic, but remember that guy from the news a while back who you mentioned didn't have any online history except for NDcourts?

Since then, I've seen an ad from a company claiming it can erase every last detail about someone online and block any future data.
 


guywhofishes

Founding Member
Joined
Apr 21, 2015
Posts
30,272
Likes
9,107
Points
1,133
Location
Faaargo, ND
it already is
Not all AI-generated images and videos automatically include watermarks indicating their AI origins, as there is no universal legal requirement mandating this across all tools or jurisdictions as of 2026. However, many major AI providers voluntarily embed watermarks, either visible or invisible, to promote transparency and help detect synthetic content. Here's a breakdown:

Types of Watermarks in AI Content
  • Visible Watermarks: These are overt indicators, such as logos, text, or icons overlaid on the content. For example:
    • OpenAI's DALL-E adds a small pattern of colored squares in the corner of generated images.
    • Some tools place faint text like "AI-generated" or a company logo.
    • These can often be cropped out or edited away easily.
  • Invisible Watermarks: These are embedded subtly into the pixels, audio tracks, or file metadata without altering the visible appearance. They can only be detected by specialized tools or algorithms. Examples include:
    • Google's SynthID, which embeds undetectable markers in images, videos, audio, and text generated by tools like Gemini or Veo. You can check for SynthID using Google's detection tools.
    • Meta's systems (e.g., for images on Facebook/Instagram and VideoSeal for videos) use IPTC metadata and invisible patterns aligned with standards like C2PA (Coalition for Content Provenance and Authenticity); a quick metadata check along these lines is sketched after this list.
    • Other providers like Microsoft, Adobe, Midjourney, and Shutterstock often include similar metadata or pixel-based markers.
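For the metadata-based markers above, you can do a rough check yourself. Here's a minimal sketch in Python, assuming the exiftool utility is installed: it dumps everything exiftool can read from the file and searches for the IPTC "trainedAlgorithmicMedia" source-type value and C2PA/JUMBF provenance tags. It's not a real detector, since it only sees declared metadata, not invisible pixel watermarks like SynthID.

```python
# Rough sketch: scan an image's metadata for common AI-provenance markers.
# Assumes the exiftool CLI is installed and on PATH. This only finds
# metadata-based labels (IPTC/XMP/C2PA), not invisible pixel watermarks
# like SynthID, and it finds nothing if the metadata was stripped.
import json
import subprocess
import sys

def scan_for_ai_markers(path):
    # `exiftool -j` prints every readable tag as a JSON array.
    out = subprocess.run(["exiftool", "-j", path],
                         capture_output=True, text=True, check=True).stdout
    tags = json.loads(out)[0]

    hits = []
    for key, value in tags.items():
        text = f"{key}={value}".lower()
        # IPTC's standard DigitalSourceType value for AI-generated content
        # is "trainedAlgorithmicMedia"; C2PA manifests ride in a JUMBF box.
        if any(m in text for m in ("trainedalgorithmicmedia", "c2pa", "jumbf")):
            hits.append(f"{key}: {value}")
    return hits

if __name__ == "__main__":
    found = scan_for_ai_markers(sys.argv[1])
    print("\n".join(found) if found else "No metadata-based AI markers found.")
```

A clean result only means nothing was declared in the metadata, not that the image is human-made; invisible watermarks need the provider's own checker (e.g., Google's SynthID detection tools mentioned above).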
Variations by Content Type
  • Images: Watermarking is more common and advanced here. Major generators (e.g., from OpenAI, Google, Adobe) typically include invisible markers, and some add visible ones. However, open-source or smaller tools may not, and watermarks can be stripped by editing software.
  • Videos: Less consistently implemented than for images, but growing. Google's SynthID and Meta's VideoSeal embed markers in video frames. Audio/video deepfakes are harder to watermark reliably, and not all generators (especially unregulated ones) apply them.
Legal and Industry Context
  • In the US, California's AI Transparency Act (effective January 1, 2026) requires large AI providers (those with over 1 million monthly users) to embed invisible watermarks with details like the provider's name, AI version, and creation timestamp. They must also offer free detection tools and optional visible labels. This applies to images, videos, and audio but isn't nationwide or global.
  • Internationally, standards like C2PA and IPTC are adopted by companies for voluntary labeling, but enforcement varies. Bad actors can use non-compliant tools to generate unmarked content.
  • Watermarks aren't foolproof: they can be removed, degraded by compression/resizing, or forged (the sketch below shows how easily a plain re-save drops metadata labels). Detection often requires tools like Hive Moderation, Google's Gemini checker, or open-source AI detectors.
If you're dealing with specific content, upload it to a detection tool (e.g., via Google or Meta) to scan for markers. For general verification, look for inconsistencies like unnatural details (e.g., extra fingers in images) as additional clues, though these aren't reliable as AI improves.
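As a concrete illustration of that fragility, here's a minimal Python sketch using the Pillow imaging library (the filenames are hypothetical): re-saving a JPEG without explicitly passing its metadata along produces a copy with no EXIF, which is exactly the kind of ordinary edit that defeats metadata-based labels. Invisible pixel watermarks like SynthID are built to survive re-saves, though heavy compression or resizing can still degrade them.

```python
# Minimal sketch: show that a plain re-save with Pillow drops metadata.
# Pillow only writes EXIF to the output if you pass exif=... explicitly,
# so the copy loses any "AI-generated" provenance label the original had.
# "labeled.jpg" is a hypothetical input file carrying such metadata.
from PIL import Image

src = Image.open("labeled.jpg")
print("EXIF bytes in original:", len(src.info.get("exif", b"")))

src.save("stripped.jpg", quality=90)   # no exif= argument: metadata dropped

copy = Image.open("stripped.jpg")
print("EXIF bytes in copy:   ", len(copy.info.get("exif", b"")))
```

That gap is part of why laws like California's pair embedding requirements with free detection tools rather than trusting the label alone.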
 
