Harwood ND AI business



Duckslayer100

Founding Member
Joined
Aug 11, 2015
Posts
4,644
Likes
253
Points
348
Location
ND's Flatter Half
should become law that all AI generated photos must contain a watermark indicating such

without it, the high trust social fabric is erased

online anyway

Definitely, except who polices those who wish to control us and the narrative?

"Laws for thee, but not for me." -- Your faithless and corrupt political masters
 

Davy Crockett

Founding Member
Thread starter
Joined
Apr 22, 2015
Posts
15,573
Likes
2,862
Points
808
Location
Boondocks
should become law that all AI generated photos must contain a watermark indicating such

without it, the high trust social fabric is erased

online anyway

Off topic, but remember that guy from the news a while back who you mentioned didn't have any online history except for NDcourts?

Since then, I've seen a company ad claiming they can erase every last detail about someone online and block any future data.
 


guywhofishes

Founding Member
Joined
Apr 21, 2015
Posts
30,272
Likes
9,109
Points
1,133
Location
Faaargo, ND
it already is
Not all AI-generated images and videos automatically include watermarks indicating their AI origins, as there is no universal legal requirement mandating this across all tools or jurisdictions as of 2026. However, many major AI providers voluntarily embed watermarks, either visible or invisible, to promote transparency and help detect synthetic content. Here's a breakdown:
Types of Watermarks in AI Content
  • Visible Watermarks: These are overt indicators, such as logos, text, or icons overlaid on the content. For example:
    • OpenAI's DALL-E adds a small pattern of colored squares in the corner of generated images.
    • Some tools place faint text like "AI-generated" or a company logo.
    • These can often be cropped out or edited away easily.
  • Invisible Watermarks: These are embedded subtly into the pixels, audio tracks, or file metadata without altering the visible appearance. They can only be detected by specialized tools or algorithms. Examples include:
    • Google's SynthID, which embeds imperceptible markers in images, videos, audio, and text generated by tools like Gemini or Veo. You can check for SynthID using Google's detection tools.
    • Meta's systems (e.g., for images on Facebook/Instagram and VideoSeal for videos) use IPTC metadata and invisible patterns aligned with standards like C2PA (Coalition for Content Provenance and Authenticity).
    • Other providers like Microsoft, Adobe, Midjourney, and Shutterstock often include similar metadata or pixel-based markers.
Variations by Content Type
  • Images: Watermarking is more common and advanced here. Major generators (e.g., from OpenAI, Google, Adobe) typically include invisible markers, and some add visible ones. However, open-source or smaller tools may not, and watermarks can be stripped by editing software.
  • Videos: Less consistently implemented than for images, but growing. Google's SynthID and Meta's VideoSeal embed markers in video frames. Audio/video deepfakes are harder to watermark reliably, and not all generators (especially unregulated ones) apply them.
Legal and Industry Context
  • In the US, California's AI Transparency Act (effective January 1, 2026) requires large AI providers (those with over 1 million monthly users) to embed invisible watermarks with details like the provider's name, AI version, and creation timestamp. They must also offer free detection tools and optional visible labels. This applies to images, videos, and audio but isn't nationwide or global.
  • Internationally, standards like C2PA and IPTC are adopted by companies for voluntary labeling, but enforcement varies. Bad actors can use non-compliant tools to generate unmarked content.
  • Watermarks aren't foolproof: They can be removed, degraded by compression/resizing, or forged. Detection often requires tools like Hive Moderation, Google's Gemini checker, or open-source AI detectors.
If you're dealing with specific content, upload it to a detection tool (e.g., via Google or Meta) to scan for markers. For general verification, look for inconsistencies like unnatural details (e.g., extra fingers in images) as additional clues, though these aren't reliable as AI improves.
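For the metadata-based labels described above (IPTC/C2PA, as opposed to pixel-level marks like SynthID), a crude first check is simply to look for the IPTC "trained algorithmic media" source-type URI in a file's raw bytes. A minimal sketch, not an official detector: a robust check would use a real XMP/C2PA parser, and this sees nothing if the metadata was stripped.

```python
# Naive byte scan for the IPTC DigitalSourceType URI that C2PA/IPTC-compliant
# generators may embed in an image's XMP metadata. This is NOT a real XMP
# parser, and it cannot see pixel-level watermarks like SynthID; those require
# the vendor's own detection tools.
AI_SOURCE_URI = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_labeled(image_bytes: bytes) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-source URI."""
    return AI_SOURCE_URI in image_bytes

# Example with synthetic data (a real check would read an actual image file):
fake_xmp = b"<xmp>...<DigitalSourceType>" + AI_SOURCE_URI + b"</DigitalSourceType>"
print(looks_ai_labeled(fake_xmp))                   # True
print(looks_ai_labeled(b"\x89PNG...plain photo"))   # False
```

Absence of the label proves nothing (editors routinely strip metadata), but its presence is a strong hint the file self-identifies as AI-generated.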
 

Davy Crockett

Founding Member
Thread starter
Joined
Apr 22, 2015
Posts
15,573
Likes
2,862
Points
808
Location
Boondocks
Will the Harwood, ND AI center make local homeowners' electricity prices rise?

Utility officials and the developer state that the Harwood AI data center will not cause electricity prices to rise for local homeowners. In fact, they argue it could help stabilize or even lower rates by spreading fixed utility costs across a larger base of users.
InForum

Key Arguments for Rate Stability
  • Fixed Cost Sharing: Large energy users like data centers pay a significant share of the utility's fixed infrastructure costs (transmission lines, substations). By adding this massive load, the Cass County Electric Cooperative can spread these expenses over more "units" of electricity, potentially reducing the burden on residential members.
  • Infrastructure Funding: The developer, Applied Digital, is responsible for paying for all power infrastructure specifically needed for the facility.
  • Proven Track Record: A similar Applied Digital facility in Ellendale, ND, reportedly saved ratepayers over $5 million in 2023 and 2024 by utilizing infrastructure more efficiently.
  • Backup Generation: The center will have complete backup diesel generation, allowing it to go offline during "max gen events" (extreme grid stress) so it does not compete with residents for power during shortages.
    InForum
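The "fixed cost sharing" argument above boils down to simple arithmetic: a fixed pool of infrastructure costs divided by more kWh sold means fewer cents of fixed cost per kWh. A rough sketch with invented numbers (these are not the co-op's actual figures; the 90% capacity factor is an assumption, and only the 280 MW load comes from the reporting):

```python
# Illustrative only: the dollar and kWh figures below are made up, not Cass
# County Electric's actual costs. The point is that a fixed infrastructure
# cost spread over more kWh sold means a smaller fixed-cost share per kWh.
fixed_costs = 50_000_000          # hypothetical annual fixed costs ($)
residential_kwh = 1_000_000_000   # hypothetical existing annual sales (kWh)

before = fixed_costs / residential_kwh

# Add a large data-center load: 280 MW running ~90% of the year.
data_center_kwh = 280_000 * 8760 * 0.90   # kW * hours/yr * capacity factor
after = fixed_costs / (residential_kwh + data_center_kwh)

print(f"fixed-cost share before: {before * 100:.2f} cents/kWh")
print(f"fixed-cost share after:  {after * 100:.2f} cents/kWh")
```

In practice large-load customers are billed under separate tariffs, so the actual effect on residential rates depends on how the cooperative allocates those fixed costs, not on this simple average.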

Counter-Arguments and Risks
  • Market Pressures: While the data center itself may not directly cause a hike, utility leaders have noted that rates could still rise due to independent factors like inflation, rising labor costs, and the need to replace aging power plants.
  • Grid Strain Concerns: Some residents and critics argue that the sheer scale of the center's demand (approximately 280 megawatts, or roughly 20% of the state's production) could eventually strain the regional grid, leading to higher long-term infrastructure costs.
  • National Trends: On a broader scale, some reports from Bloomberg and NPR suggest that the national "AI data boom" is driving up demand so quickly that utilities everywhere are scrambling to build new generation, which often results in higher bills for all customers.
    MPR News
If you'd like to dive deeper, I can look into:
  • The specific tax benefits Harwood expects to receive from the $3 billion project.
  • The current status of the legal battle between Fargo and Harwood over the annexation of the land.
  • How water usage and noise levels at the Ellendale site compare to the plans for Harwood.
 

Lycanthrope

Founding Member
Joined
May 6, 2015
Posts
7,226
Likes
2,496
Points
758
Location
Bismarck
pretty sure he was referring to the high trust society part...
 

