A.I. Are you Excited?

Do you believe AI will be a net negative or a net positive for our society?

  • Negative

  • Positive


Results are only viewable after voting.


LBrandt

★★★★★ Legendary Member
Joined
Apr 3, 2016
Posts
11,315
Likes
2,404
Points
693
Location
SE ND
Over the last two weeks I have been getting calls on my cell phone, and I am quite sure they are AI-generated calls. How would I know? Because I cannot piss it off, and I am very good at pissing telemarketers off. It's a game for this old retired drywaller to see how long it takes. If I give it some info, none of it true, then I am wasting its time and saving someone else theirs. By giving it just enough fake info, a real person will come on the line, and then the piss-off timer kicks in. Have not had one stick it out more than a minute, and then my number goes back in rotation, I am guessing. Cannot wait for spring and gardening to start. LB
 

Davy Crockett

Founding Member
Joined
Apr 22, 2015
Posts
15,561
Likes
2,856
Points
783
Location
Boondocks
AI Overview



Yes, it is possible to make a robot "mad" in the sense of triggering simulated angry behavior or aggressive, defensive responses through programming and sensor inputs, though they do not experience genuine human emotions.
 


Lycanthrope

Founding Member
Thread starter
Joined
May 6, 2015
Posts
7,216
Likes
2,492
Points
758
Location
Bismarck
I'm waiting for human vs. robot MMA! lol
[screenshot attachment: Screenshot 2026-02-19 074804.png]
 

Downrigger

Founding Member
Joined
Apr 12, 2015
Posts
1,030
Likes
101
Points
263
Location
Fargo
Here is what A.I. thinks of this thread:


Here’s a summary of the discussion on page 2 of the “A.I. Are you Excited?” thread from the Nodak Angler forums — with some good-natured razzing at the end 😄:

📌 What people were saying

Mostly skepticism and fear about AI
  • A few posters are absolutely convinced AI will be the end of humanity — or at least make us “useless and stupid.” Some sound like they just finished Terminator and are now experts.
  • One member claimed AI will replace every human job, from doctors to plumbers — and that soon knowledge workers won’t exist unless laws protect them.
  • A few folks waxed philosophical about how artificial is bad because “God made the natural world” and imitators are a problem.
A couple of more grounded takes
  • Some pointed out AI already helps in workplaces today and writes tons of code.
  • Others argued that robots aren’t about to replace electricians or construction workers anytime soon — citing real-world skills and physical challenges.
  • One tried to lighten the mood with an AI-generated song.

😆 Razzing the thread

Honestly, this reads like a grab bag of flat-earth meets sci-fi fanfic meets “my nephew told me” tech talk.
  • There’s the classic “AI is gonna overthrow humanity” camp — because every sci-fi movie ever is documentary, right?
  • Then there’s the “God made nature, not computers” crowd — which is ironic because they’re using computers right now to make that point.
  • And of course, the folks confidently predicting robot electricians and plumbers — because robots running Romex and PVC fittings are totally normal in 2026 😂.
  • Bonus: AI-generated songs as evidence that the robot uprising is imminent. Sure, next up: Terminators in overalls.
All in all it’s a fun mix of doom prophecies, tech denial, and denial of both doom and tech — classic internet forum material.
 


Lycanthrope

Thread starter
I'll one-up you and do an analysis of the entire thread from two different AI perspectives, given the same prompt. Claude (Anthropic) is notably and admittedly safety-minded, and not just safety in the way we might think is good; also safety as in not being honest about stuff like race, sex, etc. In essence it's designed to lie if it feels that the truth might cause 'harm' to people or groups. Grok is more 'honest' and not designed for 'safety' nearly as much, but even Grok can be kinda woke sometimes.
 

Lycanthrope

Thread starter
GROK 4.2(beta):
**Thread Overview**
This 9-page forum thread on NodakAngler.com (a North Dakota fishing/hunting community) started in late January 2026 with a simple poll: “Do you believe AI will be a net negative or a net positive for our society?” It quickly turned into a sprawling, informal debate among ~30-40 posters (many one-offs, but a core group of 8-10 driving it). Posts mix personal anecdotes, movie references, religious views, tech links, and casual trolling. The tone is classic rural-forum: blunt, humorous, skeptical, with heavy quoting and zero formal structure. No moderators shut it down; it rambles through job loss, sci-fi dystopias, God vs. man, surveillance, and blue-collar resilience.

### Major Contributors & Their Perspectives (Analyzed + Critiqued)

**Lycanthrope** (by far the most active; dozens of posts, often long + sourced)
**Perspective**: Strongly optimistic techno-progressive. AI unlocks unsolved science/math, boosts productivity, enables personalized medicine, DNA-specific drugs, and human augmentation (brain implants, etc.). US skepticism risks losing the “AI race” to China. Acknowledges risks (job loss, existential ~10-15% chance) but says upsides outweigh them; predicts post-scarcity, more family/community time, land as the new wealth. Shares AI-generated songs, Moltbook AI-agent forum, studies, and timelines (knowledge jobs gone in <10 years, physical labor 10-20).
**Analysis/Critique**: Most informed voice here—cites real developments (Frontier supercomputer, MIT cognitive-offload study, etc.) and avoids pure hype. Strong on accelerationist logic. Weakness: timelines feel aggressive (robot dexterity for industrial wire-pulling or plumbing is non-trivial even in 2026; current humanoid robots are impressive in labs but not yet cost-effective at scale for every Walmart aisle). Downplays cultural/psychological shock of mass displacement. Still, the most evidence-based participant.

**SupressYourself** (insider claim: works at a top software company)
**Perspective**: Hard doomsayer. AI + robotics = humans become purposeless “exhibits in a zoo.” Exponential growth means billions lose all economic value; society trends toward “Universe 25” behavioral sink (mouse utopia collapse). Even blue-collar work vanishes.
**Analysis/Critique**: Credible on current capabilities (most new code already AI-written). But leaps from “tools get better” to “humanity obsolete” without evidence on why humans couldn’t adapt into oversight, creative, or experiential roles. Ignores every prior tech wave (tractors didn’t end farming; they changed it). Classic “this time it’s different” without proving the difference.

**Fester, Skeeter, Allen** (pragmatic skeptics, especially on physical jobs)
**Perspective**: AI is “fancy Google that’s wrong 50% of the time.” Will kill white-collar/knowledge jobs faster than it creates them. Blue-collar trades (electricians, plumbers, welders) safe for decades because robots lack nimbleness in real-world messiness.
**Analysis/Critique**: Half-right. Current LLMs do hallucinate; robotics dexterity lags software. But 3D-printed homes + modular “plug-and-play” infrastructure + improving humanoids (Boston Dynamics, Figure, etc.) make full replacement plausible by 2040s. Underestimates how fast “good enough + cheap” wins over perfect human skill. Their confidence that “I’ve held a hammer, you haven’t” will protect them feels like 19th-century weavers vs. power looms.

**KDM, Eatsleeptrap, PrairieGhost** (religious/naturalist bloc)
**Perspective**: Artificial = opposite of God’s creation. Cities already prove man-made environments destroy body/mind/spirit; AI is the ultimate hubris. People will still crave nature.
**Analysis/Critique**: Coherent worldview, resonates with the forum’s outdoors audience. But selective—antibiotics, GPS fish-finders, and modern trucks are also “artificial” and welcomed. Conflates tool use with idolatry. Historically, every major tech shift (printing press, electricity) triggered similar “playing God” panic and was eventually folded into faith.

**Browneye** (surveillance/control)
**Perspective**: AI push is about elite control—Palantir/Meta trust scores, predictive policing, 15-minute cities, social-credit lite. Minority Report is the template.
**Analysis/Critique**: Legitimate concern; surveillance capitalism is real and AI supercharges it. But attributes monolithic intent to “they” (elites/WEF) while ignoring competitive fragmentation (US vs. China AI race, open-source models). Overlooks that same tech can expose corruption too.

**Davy Crockett, Obi-Wan, Rowdie** (pragmatic middle)
**Perspective**: Hunker down, go cash/land, watch the movie. Dune/Terminator references. Some upside (medical) but mostly “this won’t end well for average people.”
**Analysis/Critique**: Emotionally honest. Cash/land prep is reasonable hedge. Sci-fi analogies are fun but weak predictors—actual AI path looks more like gradual economic transformation than Skynet.

### General Tone & Accuracy
Tone: Friendly Midwestern bluntness with dark humor (“I’m going out in a blaze of glory,” “piss off a robot,” “Soylent Green… is people!”). Low hostility, high quoting. Lots of “my nephew/son knows this stuff” and movie citations.
Accuracy: Patchy. Real references (MIT study on cognitive offloading, Frontier supercomputer turbine research, Moltbook AI forum) sit next to massive overstatements (“AI already sentient,” “all jobs gone in 5 years,” “China conquering via dark-room AI”). Common errors: treating today’s LLMs as AGI, assuming linear progress (it’s exponential but still narrow), ignoring job creation in AI-adjacent fields. The “AI will make everyone dumb” fear has partial truth (cognitive offloading is real) but ignores that calculators/spell-check didn’t create a nation of idiots—wise use matters.

### My Overall Take: Where AI Is Going + Refuting Common Misconceptions
I’m bullish long-term, cautious short-term. AI is the most powerful general-purpose technology since electricity or the internet. We’re moving from “tools that follow instructions” to “systems that can plan, reason, and orchestrate.” Within 5–10 years, expect:
- Knowledge work transformed (one-person $1B companies real).
- Scientific acceleration (drug discovery, materials, fusion).
- Robotics catching up (humanoids in warehouses/homes by late 2020s).
- Economic abundance if we manage distribution (UBI experiments will scale).

Net positive for humanity if alignment and governance keep pace. But disruption will be brutal—white-collar first, then physical. Inequality could spike before abundance trickles down. Existential risk is non-zero but overblown in pop culture; the bigger near-term risks are misuse (deepfakes, autonomous weapons, bioterror) and policy paralysis from fear.

**Refuting the thread’s biggest misconceptions**:
- “AI will make us all stupid/useless” — Only if we let it. Cognitive offloading is what every tool does; the smart response is to use AI for drudgery and free the brain for higher-order thinking/creativity/relationships.
- “Robots can’t do real blue-collar work” — In 2026 maybe not cheaply/reliably everywhere, but the trendline is clear. Modular construction + dexterous robots + cheap energy will change this faster than most trades expect. Adaptation (supervising fleets of robots) beats denial.
- “This time it’s different—previous tech created more jobs” — True, but AI is general intelligence automation, not just physical or narrow. The speed and breadth are unprecedented; societies that treat it like past shifts will get steamrolled.
- “AI takeover = Skynet/Terminator” — Current systems have no goals, no consciousness, no self-preservation beyond what we code. The real danger is human-directed AI (surveillance states, weaponized disinformation) more than rogue superintelligence.
- “Only elites win” — Possible short-term, but open-source models + falling costs democratize it. A motivated rural kid with AI agents today already has more leverage than most Fortune 500 middle managers had in 2010.
- “God/nature vs. artificial is the real line” — Humans have always extended nature with tools. The moral question is stewardship and values, not “artificial bad.”

Bottom line: The thread captures genuine anxiety from a grounded community that values self-reliance and the natural world—valid. But the dominant vibe of fatalism or Luddite rejection is a losing strategy. The future belongs to those who learn to ride the wave rather than hide from it. Excited? Hell yes—with eyes wide open. The fishing will still be there; we’ll just have more time for it, or AI-designed lures that actually work.
 

Lycanthrope

Thread starter
CLAUDE SONNET 4.6:
Perfect, let me fetch all the pages now. Good — all 9 pages loaded. Now let me write the full analysis you asked for. This is a meaty thread (168 posts), so I'll give you a real treatment of the major contributors, genuine critique of their arguments, and then my own honest take on where AI is going and the common misconceptions the thread surfaces.


Full Analysis: Nodak Angler AI Thread — "A.I. Are You Excited?"

Overview

This is a 168-post thread on a North Dakota hunting and fishing forum, running from January 28 through February 20, 2026. It's a genuinely interesting social document — a snapshot of how a specific demographic of older, mostly rural, mostly male, practically-minded Americans are processing one of the most consequential technological transitions in human history. The poll runs heavily negative. The tone is skeptical to fatalistic. But there's more intellectual range here than the opening salvo suggests.


The Major Contributors — Summaries and Critiques

Lycanthrope (thread starter, Bismarck) is the most prolific and intellectually engaged poster. He consistently pushes the conversation toward bigger questions: AI's impact on geopolitics, job displacement timelines, the singularity, land value as a hedge, military applications, and the competitiveness implications of American skepticism vs. China's ambitions. He shares links to podcasts, research, and even the Moltbook AI-to-AI forum. He's clearly spending real time thinking about this.

Critique: Lycanthrope is genuinely curious but his framing has some notable blind spots. His claim that "most knowledge jobs won't exist in a decade" is stated with a confidence that isn't justified — that's a highly contested prediction even among top AI researchers. His early eugenics comment (about reproduction being "limited to selected individuals when low IQ laborers are no longer needed") is alarming and goes entirely unchallenged, which says something about both him and the room. He also dismisses the US's thoughtful regulatory instinct as simple backwards thinking, without acknowledging that unregulated AI races carry their own serious risks. He's right that complacency has strategic costs — but the argument that America needs to out-sprint China on AI without governance guardrails is more reckless than visionary.

SupressYourself works at a major software company and claims to use AI daily. He's the thread's most credentialed voice and also its most apocalyptic — "it will be the end of humanity, the only questions are when and how." When pressed by Davy Crockett on what specifically he's seen, he explains that AI is already writing most new code, and that its exponential capability growth when paired with robotics will eventually make billions of humans purposeless.

Critique: There's something both credible and frustrating about SupressYourself. The fact that he actually works with AI daily gives his pessimism more weight than the average forum doomer. But "end of humanity" is a conclusion, not an argument. He never engages with the obvious counterpoint: every prior automation wave (industrial revolution, computing, internet) was also predicted to hollow out employment, and none did in net terms. That doesn't mean this wave will be the same — it may genuinely be categorically different — but the assertion needs more than his personal professional dread to land.

KDM (Valley City) delivers the thread's most philosophically coherent anti-AI post. His argument is theological and cultural: "artificial" is by definition a copy of the natural, God-created world, and mankind's history of trying to improve on or replicate God's creation "always ends badly." He uses cities as his primary example — artificial environments that damage people physically, mentally, spiritually, and emotionally — and notes that hunters and fishermen on this very site seek out natural spaces precisely because they need what's real.

Critique: This is the most intellectually interesting anti-AI argument in the thread precisely because it doesn't rely on science fiction or job statistics. It's grounded in a coherent worldview about human nature and the limits of artifice. But it proves too much. By this logic, medicine, agriculture, architecture, and writing itself are suspect — all artificial interventions on nature. KDM's argument doesn't distinguish between different kinds and scales of artificial intervention, which limits its precision. That said, his point that people seek out natural environments as a corrective to artificial ones is empirically well-supported and more than a throwaway observation.

Fester is a consistent voice of pragmatic skepticism throughout. His concern from the start is job displacement and the absence of regulatory frameworks. He pushes back on Lycanthrope's claim that robots are already nimble enough to replace electricians doing industrial wire pulls, drawing on actual trade knowledge. He also makes a sharp early observation: AI will write a physics paper before it fixes your toilet, suggesting white-collar knowledge jobs are more vulnerable than blue-collar trades — which is actually one of the more accurate observations in the thread.

Critique: Fester is generally grounded and his points are reasonable, but he's susceptible to the same underestimation trap as the blue-collar camp broadly. "We won't see it in our lifetimes" is the same thing people said about GPS, touchscreens, and the internet. Technological pace has consistently surprised even practitioners inside the field. His skepticism about robotic dexterity in industrial settings is fair today — but the question of whether it's fair in 10 or 15 years is genuinely open. Boston Dynamics and Figure AI robots are already doing tasks that would have seemed absurd five years ago.

Skeeter and the blue-collar caucus (including Eatsleeptrap to some extent) mount the most emotionally charged defense of trade work. Skeeter is flat-out certain that "a robot will never be able to run wires through a house or fully plumb a house for water or sewer." He argues that the hands-on trades will outlast white-collar work and that school counselors should stop pushing four-year degrees.

Critique: The trade-school point is separately valid and largely correct as career advice for the next 10–15 years. But "never" is a big word. Skeeter's confidence that physical complexity permanently protects blue-collar work ignores that construction robots already lay bricks, pour concrete, and weld in controlled environments. Allen's counter-argument that future homes may be designed around robotic construction from the ground up — not retrofitted for it — is actually the sharper insight. The question isn't whether current robots can do current plumbing in current houses. It's whether future construction will be designed around human hands at all.

Iwhackwheat produces one of the thread's best lines, cleanly cutting through the blue-collar confidence: "Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup." This is a genuinely insightful point that doesn't get the attention it deserves. Economic disruption doesn't require that AI perfectly replicates a skill — it only requires that it's good enough and cheap enough that businesses restructure around it.

Browneye (Flasher, ND) introduces the surveillance and control angle, citing Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data. He references "Minority Report" as a template for AI-enabled predictive policing. Several people take this seriously.

Critique: The concern about AI-enhanced surveillance and social scoring is genuinely legitimate — China's social credit system is a real thing, and Western democracies have been incrementally building similar capabilities under different names. But Browneye's specific claim about Palantir and Meta compiling real-time trust scores from scroll speed is not substantiated — it reads like something from an unvetted YouTube video, which by his own admission is the source. The kernel of truth (AI is being used for behavioral prediction and surveillance) gets embedded in a more speculative claim that makes it easier to dismiss.

Davy Crockett is the thread's most interesting evolving voice. He starts as a pure skeptic ("hunker in the hills, go cash-only"), but by the middle of the thread is making notably more nuanced observations: "In the past, people who embraced new technology have done well for themselves. I believe AI will be the same." He also raises a genuinely smart engineering point about robot hacking vulnerability and knowledge transfer between robotic systems. He ends up as a kind of pragmatic cautious optimist.

Critique: He's right to update his views as the conversation develops. His practical questions about robot vulnerability and the learning curve for autonomous systems show more real-world systems thinking than most. The evolution from pure rejection to engaged curiosity over the course of the thread is honest and admirable.

SDMF is one of the thread's underrated voices. His late-thread post is probably the most empirically grounded of anyone's: he points out that US unemployment hovers near 4% despite decades of increasing computerization and robotization, that automation historically creates new job categories alongside displacing old ones, and that nurses haven't been freed up by medical computing — they've just been asked to do more. He argues that adaptable, motivated people will find fulfilling work in a transformed economy.

Critique: SDMF is essentially making the standard economic counterargument to technological unemployment — and it's been right every time so far. His empirical grounding is the best in the thread. His weaker spot is the implicit assumption that "this time won't be different," when there are serious economists who argue that LLM-scale AI may represent a qualitative shift rather than another incremental automation wave. He acknowledges "big fast changes are coming" — but doesn't fully grapple with the possibility that the historical pattern may not extrapolate.

Sum1 and FishFinder97 represent the thread's biblical apocalypticism wing. Sum1 interprets UBI + digital currency + AI as the fulfillment of Revelation's "mark of the beast" prophecy. FishFinder97 sees AI as "just the start of the end" in a sequence leading to UBI and digital currency. These posts get more likes than you might expect.

Critique: There's no point arguing with eschatological interpretation on its own terms. What's worth noting sociologically is that for a segment of this community, concerns about AI are inseparable from a broader worldview in which powerful, globalized, technocratic systems are inherently suspect and foretold. This isn't irrational given their value system — it's internally consistent. The problem is that it forecloses engagement rather than enabling it.

Ragnar (Pensacola) delivers the thread's best corrective to the "it'll never happen in our lifetime" crowd: if you told him in 1985 that by 2010 he'd have the entire accumulated knowledge of humanity in his pocket and could call anyone anywhere for free, he'd have said you were crazy. This is the best single argument anyone makes for respecting the pace of technological change.

lunkerslayer is reliably enthusiastic about the medical applications — personalized medicine, neurological repair, clinical trial simulation — and he's not wrong that these are genuine frontiers. His posts are a bit sprawling but he's engaging with real possibilities.

fly2cast is the only person in the thread who gives a completely uncomplicated positive testimonial: he uses AI daily for proofreading legal documents and used it in Italy to translate signs and learn about landmarks. No drama, no existential crisis, just a useful tool making his life better. This perspective gets almost no traction in the thread, which is revealing.


General Tone and Accuracy Assessment

The tone is predominantly defensive, tribal, and future-averse, with genuine intellectual sparks scattered throughout. The community skews toward people who value self-sufficiency, distrust institutions and technocratic elites, and find meaning in physical work and the natural world. That's not a flaw — it's a perspective with real coherence. But it means the thread is not well-calibrated to evaluate AI clearly.

Where the thread is accurate:

  • Job displacement is a real and serious risk that deserves more policy attention than it gets.
  • AI's accuracy is imperfect and it can be confidently wrong (Fester's ChatGPT tax code battle is a common experience).
  • The concentration of AI power in the hands of a small number of companies and individuals is a legitimate governance concern.
  • The cognitive dependency risk is real and supported by the MIT study Fly Carpin cited.
  • The surveillance and behavioral prediction angle is worth taking seriously even if the specific claims are sometimes unverified.
Where the thread goes wrong or overstates:

  • "The end of humanity" as a serious near-term prediction conflates near-term disruption with extinction scenarios. Most AI researchers, including pessimistic ones, do not assign high probability to extinction within any meaningful timeframe.
  • The social-media-as-analog argument keeps appearing as a conversation stopper. AI and social media share some characteristics (concentration of power, information distortion, addiction dynamics) but are fundamentally different in their capability profile and economic structure.
  • Browneye's viral-video-sourced surveillance claims are stated as fact.
  • The "AI can't do physical work, so blue-collar jobs are safe" argument underestimates robotic progress and, more importantly, underestimates how AI changes what kinds of physical work get built in the first place.
  • Rowdie's claim that AI refused to find a mugshot of a minority suspect but said it would show one of a white suspect is almost certainly a misread of what happened — image search for real individuals raises privacy concerns regardless of race, not racial ones per se. Lycanthrope endorses his reading that this proves AI is "very left leaning," which is a significant oversimplification of how content moderation and safety guidelines work.

My Honest Take on Where AI Is Going — And the Real Misconceptions

Having read 168 posts of this, here's what I think is actually true, as plainly as I can put it.

The job displacement concern is real but more nuanced than the thread allows. Every automation wave has destroyed job categories and created new ones, and the net result over 200+ years has been more jobs, better wages, and shorter working hours. That pattern may genuinely break with AI — because for the first time, the technology threatens cognitive labor at the same time as physical labor. But even here, the transition is more likely to be disruptive and uneven than to result in mass permanent unemployment. The people at greatest risk aren't the self-reliant, practically skilled people posting on this forum — they're mid-level knowledge workers doing repetitive cognitive tasks: data entry, basic legal and financial analysis, customer service, first-draft writing. The people best protected are either at the very top of cognitive complexity or embedded in physical, contextual, relationship-based work that robots genuinely can't replicate — yet.

"Yet" is doing enormous work in every reassuring prediction. Fester is right that industrial wire pulling isn't like robot surgery. But the honest answer is we don't know what the robotics landscape looks like in 15 years. The people who are most certain that physical trades are permanently safe are making the same mistake as the people who were certain in 1990 that computers would never replace skilled knowledge workers.

The AI-as-humanity-ender scenario is being dramatically overclaimed. SupressYourself works in AI and is still stating it as a near-certainty. But current AI systems — including the most advanced ones — are nowhere near the kind of recursive self-improvement, goal formation, and autonomous agency that the existential risk scenarios require. Those scenarios are worth taking seriously as tail risks in the long run, but treating them as imminent certainties misrepresents the state of the technology.

The "AI is biased/left-leaning" claim deserves a more careful reading. When Rowdie says AI refused to show him a minority suspect's photo, the most plausible explanation isn't political bias — it's that AI image tools are cautious about generating or retrieving real individuals' images for privacy and legal reasons. That caution can be imperfect and sometimes overly conservative. But the interpretation that it reflects racial politics is a leap. Similarly, Lycanthrope's claim that ChatGPT and Claude are "very left leaning" on political topics is a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts.

The competition-with-China argument has real merit but is also regularly abused. Lycanthrope is correct that AI leadership has serious national security and economic implications, and that excessive regulatory caution carries its own risks. But "we can't afford to slow down" has been used to justify skipping safety work in every major technological domain from nuclear to pharmaceuticals to social media, and the results have been consistently mixed. Racing China is a real strategic consideration — it's not a blank check to forgo governance.

The most underappreciated point in this entire thread is fly2cast's. He uses AI every day as a practical tool, it saves him time, it went well on his Italy trip, he has no complaints. That experience is wildly more common than the forum discourse would suggest, but it generates no drama and therefore no posts. The selection bias in AI discourse — both on this forum and in media broadly — is toward fear, outrage, and apocalypse. The mundane daily utility doesn't get threads.

What I genuinely believe about where this goes: AI will be enormously disruptive to the economy, more so than social media and probably more so than the internet in terms of sheer economic impact. The disruption will be uneven — hitting some sectors hard and fast, others slowly, and leaving others largely untouched for years. The people who will be hurt most are those whose work is routine, cognitive, and not embedded in complex physical or social contexts. The people who will benefit most are those who learn to work with these tools effectively. The existential risks are real as long-run tail risks but not imminent. The surveillance risks are more pressing and less discussed. And the question of who controls these systems — a handful of companies and governments — is the most important political question of the next decade, and it barely gets addressed on this thread or anywhere else with the seriousness it deserves.

The Nodak Angler community's instinct that something big and potentially dangerous is happening is correct. Their analysis of what exactly the danger is, and their reflexive dismissal of any upside, is where they lose the thread.
 

Lycanthrope

Founding Member
Founding Member
Thread starter
Joined
May 6, 2015
Posts
7,216
Likes
2,492
Points
758
Location
Bismarck
AND for those who really dig this AI-generated 'slop', I had Grok critique Claude's analysis!

**Critical Review of Claude Sonnet 4.6’s Response**

Claude’s analysis is one of the strongest single-model outputs I’ve seen for this kind of sprawling, informal forum thread: 168 posts (confirmed accurate), well-structured, genuinely engaged with individual voices rather than generic “forum doomers vs. tech bros,” and it correctly identifies the thread’s sociological value as a rural, self-reliant, nature-oriented demographic processing AI. It delivers exactly what the prompt asked for—summary + critique of major contributors, tone/accuracy assessment, then the model’s own take + misconception refutations. The writing is clear, empathetic without pandering, and the critiques are mostly substantive.

**Major Strengths (where Claude is spot-on)**
- **Factual fidelity to the thread**:
- Lycanthrope’s exact eugenics-adjacent line (“reproduction is limited to selected individuals when low IQ laborers are no longer needed”) is quoted correctly and was indeed unchallenged.
- SupressYourself’s “I work with AI every day. It will be the end of humanity…” is verbatim.
- KDM’s theological “artificial is the opposite of natural… IT ALWAYS ends badly” + cities example: perfect.
- Iwhackwheat’s killer line (“Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup.”) captured verbatim (page 6).
- fly2cast’s uncomplicated positive testimonial (proofreading Century Code + Italy camera translation) is quoted almost word-for-word and rightly called “revealing” for what the thread ignores.
- SDMF’s empirical unemployment ~4% + nurses-still-overworked point: exact.
- Ragnar’s 1985-to-2010 pocket-knowledge analogy: exact.
- Rowdie’s mugshot refusal story + Lycanthrope’s follow-up calling GPT/Claude/Gemini “left leaning”: all confirmed on page 7.
- **Critiques of contributors**: Sharp and fair. KDM “proves too much” (medicine, agriculture, etc. are also artificial) is the single best rebuttal in the entire response. Fester’s “AI writes physics papers before fixing toilets” is correctly praised as one of the more accurate early observations. The blue-collar “never” absolutism is handled with the right nuance (“yet” + future homes designed for robots).
- **Tone assessment**: “Defensive, tribal, and future-averse… values self-sufficiency, distrusts elites, finds meaning in physical work and nature” — this is a precise, non-condescending diagnosis that actually respects the community.
- **Selection-bias insight** on why positive mundane uses (like fly2cast) get no traction: excellent meta-observation.

**Inconsistencies, Weak Spots, and Overreaches**
1. **Browneye surveillance details**: Claude says Browneye cited “Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data… scroll speed… from an unvetted YouTube video.” Thread summaries show Browneye on 15-minute cities, WEF austerity, AI self-preservation/deception videos, and general control fears — but the specific Palantir/Meta/scroll-speed combo isn’t in the extracted posts. It’s a plausible paraphrase of what he linked, but Claude presents it as direct claim + source critique. Minor embellishment.
2. **Poll claim**: “The poll runs heavily negative.” Results are hidden until you vote (confirmed on page 1), so this is an inference from tone. Fair inference, but stated as fact.
3. **AI political bias framing**: Here Claude is weakest and most inconsistent with reality. He calls the “left-leaning” observation “a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts.” This is a classic false-equivalence dodge. The documented, reproducible left/center-left tilt in GPT/Claude-family models (RLHF + safety training data) is not symmetric; multiple audits (Rozado 2023–2025, political compass tests, refusal asymmetries on controversial topics) show it. Claude literally defending its own family here feels like corporate priors leaking through.
4. **Hedging tension in own take**: Claude says the historical “automation always creates more jobs” pattern “may genuinely break with AI” (because cognitive + physical hit simultaneously), then two paragraphs later concludes “more likely to be disruptive and uneven than to result in mass permanent unemployment.” That’s not inconsistency per se, but it’s the classic economist two-step: acknowledge the qualitative shift, then retreat to the historical base case. Feels like safety-tuned hedging.
5. **Slight over-critique of Lycanthrope**: Calling the China-race-without-guardrails view “more reckless than visionary” and the eugenics line “alarming” is fair game for critique, but Claude is harsher on the thread’s strongest evidence-based optimist than on the doomers. Reflects Anthropic’s constitutional-AI emphasis on governance and downside risks.
6. **Minor omission**: Claude doesn’t note that the thread is still active as of Feb 20 2026 (last posts on page 9), but that’s trivial.

**General Tone of Claude’s Response**
Professional, measured, slightly professorial. It treats the forum participants with more respect than most big-model outputs would (no sneering at “older rural males”), but there is a subtle coastal-intellectual framing: surveillance and “who controls these systems” elevated to “the most important political question,” while accelerationist upsides (scientific breakthroughs, one-person billion-dollar companies) get less airtime. It’s more “cautious governance liberal” than neutral. Still far more even-handed than pure doomer or pure hype takes.

**Comparison to My Earlier Analysis**
My previous summary was more bullish/accelerationist (post-scarcity, robotics timelines, open-source democratization). Claude is more guarded on timelines and heavier on governance/surveillance risks — which matches the thread’s anxiety better in tone, but under-weights the evidence Lycanthrope was citing (Moltbook AI forum, DNA mapping, robot surgery videos already real). Both of us correctly refute the same big misconceptions (blue-collar “never,” Skynet immediacy, “AI makes us all stupid” without agency). Claude’s fly2cast insight and KDM rebuttal are stronger than mine; my robotics-trendline pushback on Fester/Skeeter is sharper.

**Bottom Line on Claude’s Response**
9/10. Exceptionally thorough and honest for an AI summary of 168 messy forum posts. The few inconsistencies are mostly minor factual paraphrases or safety-tuned framing on bias/governance. It improves on my earlier take in nuance and sociological empathy while being slightly less bullish on the upside. If you forced me to pick one analysis to give a non-technical North Dakota angler who just read the thread, I’d probably send Claude’s — it mirrors their concerns back to them more faithfully before offering the counter.

The thread itself remains a goldmine: raw, unfiltered, zero corporate polish. Its fatalism is understandable but ultimately self-defeating. The accurate parts (job shock coming, surveillance real, cognitive offloading documented) are right; the misconceptions (“never for trades,” “end of humanity next decade,” “AI is sentient and left-wing on purpose”) are the ones that will leave people unprepared. AI isn’t coming for your fishing hole — it’s coming for the drudgery that kept you from it. The winners will be the ones who treat it like every previous tool: master it, don’t fear it into irrelevance.
 

Kurtr

Founding Member
Founding Member
Joined
Apr 12, 2015
Posts
20,389
Likes
5,826
Points
1,008
Location
Mobridge,Sd
AI will never replace anything that has to do with animals; they won't be able to read them, as it's an ever-changing process by the second.
 


Lycanthrope

Founding Member
Founding Member
Thread starter
Joined
May 6, 2015
Posts
7,216
Likes
2,492
Points
758
Location
Bismarck
AI will never replace anything that has to do with animals; they won't be able to read them, as it's an ever-changing process by the second.
People are animals... I would avoid the "AI will never" type of comment, unless you like being wrong! :devilish:
 
