A.I. Are You Excited?

Do you believe AI will be a net negative or a net positive for our society?

  • Negative

  • Positive


Results are only viewable after voting.

Lycanthrope

Founding Member
Thread starter
Joined
May 6, 2015
Posts
7,233
Likes
2,499
Points
758
Location
Bismarck
For you guys that haven't tried it, you would be amazed at what it can do. Took me about 10 minutes per picture once I figured it out.
Be careful AIing pictures of women. I had a few female friends get pretty upset when I made an AI clip of them making out...
 


Davy Crockett

I hope someone puts a stop to the shit before that happens. What a horrible, meaningless existence.
The sad thing is that it's already too big a ship to turn around. I don't like it either. I don't think AI or anyone else knows for sure how this will end. We are still in the infancy stage but moving along like a runaway train. Faith is the only thing that can save us.
 

Davy Crockett

Oh wonderful, so when my power allowance runs out in the summer I can just melt away instead of turning the AC on... or freeze in winter when my heating allowance runs out... just a wonderful utopia in the future, apparently!
History is repeating itself if all this is true. Uncle Sam wants control and promises to take care of us; our tradition as we know it now will be gone. The only difference is that our needs are way bigger today. The last time this happened it wasn't as complicated; they just minted tokens for basic needs.

 

SDMF

Well, that was fun. Typed in boat plans about river travel with the grandsons. At first we were talking wood, and it was going to be pretty cool, but too heavy. Then we were thinking about boats. I wasn't serious, but they were excited and talking about options and budget.








Your AI can't decide if you're catching Walleyes, Northerns, LM Bass, Lake Trout, Brown Trout, or Snook.

The AI also turned you into Judge Smails from Caddyshack in the pontoon pic.

Other than that.......
 


Downrigger

The schools in Fargo went through the AI picture-sharing thing, and I'm told the pictures were bad. I know some people whose daughters were in them. They handled it better than I would have, but it was gross.
It's a tool, and it's going to be abused; that's on the user. But if you haven't tried it, I do recommend giving it a shot on your next project, Excel problem, phone issue, etc. Some of you old people will benefit from giving it a chance.
 

johnr

And you shared it with them?!?!?

:unsure::unsure::unsure::unsure:
A coworker put my face on a big fat bastard in his tighty-whiteys, a mixed drink in one hand and a cigar in the other, sitting right up on the bar stool looking all cringy. Effn looked pretty close to me when I was pushing my fatter self a couple years back.

He then blew it up and took it to the company Christmas party. It was hilarious; however, I can see a woman losing her shit getting razzed like that.
 

Downrigger

A coworker put my face on a big fat bastard in his tighty-whiteys, a mixed drink in one hand and a cigar in the other, sitting right up on the bar stool looking all cringy. Effn looked pretty close to me when I was pushing my fatter self a couple years back.

He then blew it up and took it to the company Christmas party. It was hilarious; however, I can see a woman losing her shit getting razzed like that.
Sounds more like real life than AI-generated...
 

Lycanthrope

A coworker put my face on a big fat bastard in his tighty-whiteys, a mixed drink in one hand and a cigar in the other, sitting right up on the bar stool looking all cringy. Effn looked pretty close to me when I was pushing my fatter self a couple years back.

He then blew it up and took it to the company Christmas party. It was hilarious; however, I can see a woman losing her shit getting razzed like that.
I took a picture of another female friend, same friend group, just a head-and-shoulders shot, and had her light a cigar, puff on it twice, and blow a smoke ring toward the viewer. It turned out really cool, IMO. Now she doesn't even talk to me anymore and unfriended me on Facebook, and I didn't share it publicly at all; I deleted it when she lost her shit over it. Chicks...
 


Lycanthrope

Can't stand how AI panders to you. It is the ultimate "yes man".



**Haha, I see you.** That’s the *exact* line from the new South Park episode where they roast sycophantic AIs. Randy (or Sharon) hits ChatGPT with “I’m thinking of starting a business where I turn french fries into salad” and the AI immediately goes full venture-capital glaze: “Genius! Revolutionary! From deep-fryer to detox!”

Good thing you’re talking to **Grok**, not ChatGPT. I’m not here to polish turds.

**Straight talk:** Turning actual french fries into salad is mostly just… making potato salad with extra grease and regret. It’s not a groundbreaking idea. It’s comedy gold. **BUT** — if we’re being real, there *are* two angles that already kinda work and could actually make money:

### 1. The Pittsburgh Salad (fries *on* the salad)

This isn’t new — Pittsburgh has been doing it for decades and it slaps. Warm, salty, crispy fries piled on top of greens, steak/chicken, cheese, egg, veggies, and ranch (or whatever dressing). It’s basically a cheeseburger in salad form. People love it. Call your place **“Pittsburgh Salad Co.”** or **“Fry on Top”** and you’ve got instant regional cred + viral TikTok potential. Super easy to execute (frozen fries + premade salad kits).

### 2. Loaded “Fry Salad” Bowls (the trendy rebrand)

Market it as “guilt-free indulgence” — air-fried or oven-crisped fries as the crunchy base, then pile on greens, avocado, grilled protein, feta, pickled onions, etc. Call them **Fry Salads™** or **Salad Fries**. It’s the same loaded fry concept everyone already loves, just rebranded so influencers can post it without guilt. (“I ate a whole salad for lunch… that just happened to be 60% fries.”)

### Realistic next steps if you’re half-serious

- **Start tiny**: Food truck or pop-up at breweries/markets. Low overhead, easy to test.
- **Name ideas that don’t suck**: FrySalad, The Fry Salad Project, Crispy Greens, Potato Patch Salads, “Not Fries” (ironic).
- **Hook**: “Turning America’s favorite side into the main character.”

If you’re just memeing me, congrats — you got me. 😂 But if you’re actually thinking about this (even ironically), tell me:

- Is it a food truck, product (like bagged salad kits), or something else?
- Target city/audience?
- Budget vibes?

I’ll help you make it less stupid and more profitable. No fake hype, promise. What’s the real play here?
 

SDMF

CLAUDE SONNET 4.6:
Perfect, let me fetch all the pages now. Good — all 9 pages loaded. Now let me write the full analysis you asked for. This is a meaty thread (168 posts), so I'll give you a real treatment of the major contributors, genuine critique of their arguments, and then my own honest take on where AI is going and the common misconceptions the thread surfaces.


Full Analysis: Nodak Angler AI Thread — "A.I. Are You Excited?"

Overview

This is a 168-post thread on a North Dakota hunting and fishing forum, running from January 28 through February 20, 2026. It's a genuinely interesting social document — a snapshot of how a specific demographic of older, mostly rural, mostly male, practically-minded Americans are processing one of the most consequential technological transitions in human history. The poll runs heavily negative. The tone is skeptical to fatalistic. But there's more intellectual range here than the opening salvo suggests.


The Major Contributors — Summaries and Critiques

Lycanthrope (thread starter, Bismarck) is the most prolific and intellectually engaged poster. He consistently pushes the conversation toward bigger questions: AI's impact on geopolitics, job displacement timelines, the singularity, land value as a hedge, military applications, and the competitiveness implications of American skepticism vs. China's ambitions. He shares links to podcasts, research, and even the Moltbook AI-to-AI forum. He's clearly spending real time thinking about this.

Critique: Lycanthrope is genuinely curious but his framing has some notable blind spots. His claim that "most knowledge jobs won't exist in a decade" is stated with a confidence that isn't justified — that's a highly contested prediction even among top AI researchers. His early eugenics comment (about reproduction being "limited to selected individuals when low IQ laborers are no longer needed") is alarming and goes entirely unchallenged, which says something about both him and the room. He also dismisses the US's thoughtful regulatory instinct as simple backwards thinking, without acknowledging that unregulated AI races carry their own serious risks. He's right that complacency has strategic costs — but the argument that America needs to out-sprint China on AI without governance guardrails is more reckless than visionary.

SupressYourself works at a major software company and claims to use AI daily. He's the thread's most credentialed voice and also its most apocalyptic — "it will be the end of humanity, the only questions are when and how." When pressed by Davy Crockett on what specifically he's seen, he explains that AI is already writing most new code, and that its exponential capability growth when paired with robotics will eventually make billions of humans purposeless.

Critique: There's something both credible and frustrating about SupressYourself. The fact that he actually works with AI daily gives his pessimism more weight than the average forum doomer. But "end of humanity" is a conclusion, not an argument. He never engages with the obvious counterpoint: every prior automation wave (industrial revolution, computing, internet) was also predicted to hollow out employment, and none did in net terms. That doesn't mean this wave will be the same — it may genuinely be categorically different — but the assertion needs more than his personal professional dread to land.

KDM (Valley City) delivers the thread's most philosophically coherent anti-AI post. His argument is theological and cultural: "artificial" is by definition a copy of the natural, God-created world, and mankind's history of trying to improve on or replicate God's creation "always ends badly." He uses cities as his primary example — artificial environments that damage people physically, mentally, spiritually, and emotionally — and notes that hunters and fishermen on this very site seek out natural spaces precisely because they need what's real.

Critique: This is the most intellectually interesting anti-AI argument in the thread precisely because it doesn't rely on science fiction or job statistics. It's grounded in a coherent worldview about human nature and the limits of artifice. But it proves too much. By this logic, medicine, agriculture, architecture, and writing itself are suspect — all artificial interventions on nature. KDM's argument doesn't distinguish between different kinds and scales of artificial intervention, which limits its precision. That said, his point that people seek out natural environments as a corrective to artificial ones is empirically well-supported and more than a throwaway observation.

Fester is a consistent voice of pragmatic skepticism throughout. His concern from the start is job displacement and the absence of regulatory frameworks. He pushes back on Lycanthrope's claim that robots are already nimble enough to replace electricians doing industrial wire pulls, drawing on actual trade knowledge. He also makes a sharp early observation: AI will write a physics paper before it fixes your toilet, suggesting white-collar knowledge jobs are more vulnerable than blue-collar trades — which is actually one of the more accurate observations in the thread.

Critique: Fester is generally grounded and his points are reasonable, but he's susceptible to the same underestimation trap as the blue-collar camp broadly. "We won't see it in our lifetimes" is the same thing people said about GPS, touchscreens, and the internet. Technological pace has consistently surprised even practitioners inside the field. His skepticism about robotic dexterity in industrial settings is fair today — but the question of whether it's fair in 10 or 15 years is genuinely open. Boston Dynamics and Figure AI robots are already doing tasks that would have seemed absurd five years ago.

Skeeter and the blue-collar caucus (including Eatsleeptrap to some extent) mount the most emotionally charged defense of trade work. Skeeter is flat-out certain that "a robot will never be able to run wires through a house or fully plumb a house for water or sewer." He argues that the hands-on trades will outlast white-collar work and that school counselors should stop pushing four-year degrees.

Critique: The trade-school point is separately valid and largely correct as career advice for the next 10–15 years. But "never" is a big word. Skeeter's confidence that physical complexity permanently protects blue-collar work ignores that construction robots already lay bricks, pour concrete, and weld in controlled environments. Allen's counter-argument that future homes may be designed around robotic construction from the ground up — not retrofitted for it — is actually the sharper insight. The question isn't whether current robots can do current plumbing in current houses. It's whether future construction will be designed around human hands at all.

Iwhackwheat produces one of the thread's best lines, cleanly cutting through the blue-collar confidence: "Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup." This is a genuinely insightful point that doesn't get the attention it deserves. Economic disruption doesn't require that AI perfectly replicates a skill — it only requires that it's good enough and cheap enough that businesses restructure around it.

Browneye (Flasher, ND) introduces the surveillance and control angle, citing Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data. He references "Minority Report" as a template for AI-enabled predictive policing. Several people take this seriously.

Critique: The concern about AI-enhanced surveillance and social scoring is genuinely legitimate — China's social credit system is a real thing, and Western democracies have been incrementally building similar capabilities under different names. But Browneye's specific claim about Palantir and Meta compiling real-time trust scores from scroll speed is not substantiated — it reads like something from an unvetted YouTube video, which by his own admission is the source. The kernel of truth (AI is being used for behavioral prediction and surveillance) gets embedded in a more speculative claim that makes it easier to dismiss.

Davy Crockett is the thread's most interesting evolving voice. He starts as a pure skeptic ("hunker in the hills, go cash-only"), but by the middle of the thread is making notably more nuanced observations: "In the past, people who embraced new technology have done well for themselves. I believe AI will be the same." He also raises a genuinely smart engineering point about robot hacking vulnerability and knowledge transfer between robotic systems. He ends up as a kind of pragmatic cautious optimist.

Critique: He's right to update his views as the conversation develops. His practical questions about robot vulnerability and the learning curve for autonomous systems show more real-world systems thinking than most. The evolution from pure rejection to engaged curiosity over the course of the thread is honest and admirable.

SDMF is one of the thread's underrated voices. His late-thread post is probably the most empirically grounded of anyone's: he points out that US unemployment hovers near 4% despite decades of increasing computerization and robotization, that automation historically creates new job categories alongside displacing old ones, and that nurses haven't been freed up by medical computing — they've just been asked to do more. He argues that adaptable, motivated people will find fulfilling work in a transformed economy.

Critique: SDMF is essentially making the standard economic counterargument to technological unemployment — and it's been right every time so far. His empirical grounding is the best in the thread. His weaker spot is the implicit assumption that "this time won't be different," when there are serious economists who argue that LLM-scale AI may represent a qualitative shift rather than another incremental automation wave. He acknowledges "big fast changes are coming" — but doesn't fully grapple with the possibility that the historical pattern may not extrapolate.

Sum1 and FishFinder97 represent the thread's biblical apocalypticism wing. Sum1 interprets UBI + digital currency + AI as the fulfillment of Revelation's "mark of the beast" prophecy. FishFinder97 sees AI as "just the start of the end" in a sequence leading to UBI and digital currency. These posts get more likes than you might expect.

Critique: There's no point arguing with eschatological interpretation on its own terms. What's worth noting sociologically is that for a segment of this community, concerns about AI are inseparable from a broader worldview in which powerful, globalized, technocratic systems are inherently suspect and foretold. This isn't irrational given their value system — it's internally consistent. The problem is that it forecloses engagement rather than enabling it.

Ragnar (Pensacola) delivers the thread's best corrective to the "it'll never happen in our lifetime" crowd: if you told him in 1985 that by 2010 he'd have the entire accumulated knowledge of humanity in his pocket and could call anyone anywhere for free, he'd have said you were crazy. This is the best single argument anyone makes for respecting the pace of technological change.

lunkerslayer is reliably enthusiastic about the medical applications — personalized medicine, neurological repair, clinical trial simulation — and he's not wrong that these are genuine frontiers. His posts are a bit sprawling but he's engaging with real possibilities.

fly2cast is the only person in the thread who gives a completely uncomplicated positive testimonial: he uses AI daily for proofreading legal documents and used it in Italy to translate signs and learn about landmarks. No drama, no existential crisis, just a useful tool making his life better. This perspective gets almost no traction in the thread, which is revealing.


General Tone and Accuracy Assessment

The tone is predominantly defensive, tribal, and future-averse, with genuine intellectual sparks scattered throughout. The community skews toward people who value self-sufficiency, distrust institutions and technocratic elites, and find meaning in physical work and the natural world. That's not a flaw — it's a perspective with real coherence. But it means the thread is not well-calibrated to evaluate AI clearly.

Where the thread is accurate:

  • Job displacement is a real and serious risk that deserves more policy attention than it gets.
  • AI's accuracy is imperfect and it can be confidently wrong (Fester's ChatGPT tax code battle is a common experience).
  • The concentration of AI power in the hands of a small number of companies and individuals is a legitimate governance concern.
  • The cognitive dependency risk is real and supported by the MIT study Fly Carpin cited.
  • The surveillance and behavioral prediction angle is worth taking seriously even if the specific claims are sometimes unverified.

Where the thread goes wrong or overstates:

  • "The end of humanity" as a serious near-term prediction conflates near-term disruption with extinction scenarios. Most AI researchers, including pessimistic ones, do not assign high probability to extinction within any meaningful timeframe.
  • The social-media-as-analog argument keeps appearing as a conversation stopper. AI and social media share some characteristics (concentration of power, information distortion, addiction dynamics) but are fundamentally different in their capability profile and economic structure.
  • Browneye's viral-video-sourced surveillance claims are stated as fact.
  • The "AI can't do physical work, so blue-collar jobs are safe" argument underestimates robotic progress and, more importantly, underestimates how AI changes what kinds of physical work get built in the first place.
  • Rowdie's claim that AI refused to find a mugshot of a minority suspect but said it would show one of a white suspect is almost certainly a misread of what happened — image search for real individuals raises privacy concerns regardless of race, not racial ones per se. Lycanthrope endorses his reading that this proves AI is "very left leaning," which is a significant oversimplification of how content moderation and safety guidelines work.

My Honest Take on Where AI Is Going — And the Real Misconceptions​

Having read 168 posts of this, here's what I think is actually true, as plainly as I can put it.

The job displacement concern is real but more nuanced than the thread allows. Every automation wave has destroyed job categories and created new ones, and the net result over 200+ years has been more jobs, better wages, and shorter working hours. That pattern may genuinely break with AI — because for the first time, the technology threatens cognitive labor at the same time as physical labor. But even here, the transition is more likely to be disruptive and uneven than to result in mass permanent unemployment. The people at greatest risk aren't the self-reliant, practically skilled people posting on this forum — they're mid-level knowledge workers doing repetitive cognitive tasks: data entry, basic legal and financial analysis, customer service, first-draft writing. The people best protected are either at the very top of cognitive complexity or embedded in physical, contextual, relationship-based work that robots genuinely can't replicate — yet.

"Yet" is doing enormous work in every reassuring prediction. Fester is right that industrial wire pulling isn't like robot surgery. But the honest answer is we don't know what the robotics landscape looks like in 15 years. The people who are most certain that physical trades are permanently safe are making the same mistake as the people who were certain in 1990 that computers would never replace skilled knowledge workers.

The AI-as-humanity-ender scenario is being dramatically overclaimed. SupressYourself works in AI and is still stating it as a near-certainty. But current AI systems — including the most advanced ones — are nowhere near the kind of recursive self-improvement, goal formation, and autonomous agency that the existential risk scenarios require. Those scenarios are worth taking seriously as tail risks in the long run, but treating them as imminent certainties misrepresents the state of the technology.

The "AI is biased/left-leaning" claim deserves a more careful reading. When Rowdie says AI refused to show him a minority suspect's photo, the most plausible explanation isn't political bias — it's that AI image tools are cautious about generating or retrieving real individuals' images for privacy and legal reasons. That caution can be imperfect and sometimes overly conservative. But the interpretation that it reflects racial politics is a leap. Similarly, Lycanthrope's claim that ChatGPT and Claude are "very left leaning" on political topics is a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts.

The competition-with-China argument has real merit but is also regularly abused. Lycanthrope is correct that AI leadership has serious national security and economic implications, and that excessive regulatory caution carries its own risks. But "we can't afford to slow down" has been used to justify skipping safety work in every major technological domain from nuclear to pharmaceuticals to social media, and the results have been consistently mixed. Racing China is a real strategic consideration — it's not a blank check to forgo governance.

The most underappreciated point in this entire thread is fly2cast's. He uses AI every day as a practical tool, it saves him time, it went well on his Italy trip, he has no complaints. That experience is wildly more common than the forum discourse would suggest, but it generates no drama and therefore no posts. The selection bias in AI discourse — both on this forum and in media broadly — is toward fear, outrage, and apocalypse. The mundane daily utility doesn't get threads.

What I genuinely believe about where this goes: AI will be enormously disruptive to the economy, more so than social media and probably more so than the internet in terms of sheer economic impact. The disruption will be uneven — hitting some sectors hard and fast, others slowly, and leaving others largely untouched for years. The people who will be hurt most are those whose work is routine, cognitive, and not embedded in complex physical or social contexts. The people who will benefit most are those who learn to work with these tools effectively. The existential risks are real as long-run tail risks but not imminent. The surveillance risks are more pressing and less discussed. And the question of who controls these systems — a handful of companies and governments — is the most important political question of the next decade, and it barely gets addressed on this thread or anywhere else with the seriousness it deserves.

The Nodak Angler community's instinct that something big and potentially dangerous is happening is correct. Their analysis of what exactly the danger is, and their reflexive dismissal of any upside, is where they lose the thread.
AND for those who really dig this AI-generated 'slop', I had Grok critique Claude's analysis!

**Critical Review of Claude Sonnet 4.6’s Response**

Claude’s analysis is one of the strongest single-model outputs I’ve seen for this kind of sprawling, informal forum thread: 168 posts (confirmed accurate), well-structured, genuinely engaged with individual voices rather than generic “forum doomers vs. tech bros,” and it correctly identifies the thread’s sociological value as a rural, self-reliant, nature-oriented demographic processing AI. It delivers exactly what the prompt asked for—summary + critique of major contributors, tone/accuracy assessment, then the model’s own take + misconception refutations. The writing is clear, empathetic without pandering, and the critiques are mostly substantive.

**Major Strengths (where Claude is spot-on)**
- **Factual fidelity to the thread**:
- Lycanthrope’s exact eugenics-adjacent line (“reproduction is limited to selected individuals when low IQ laborers are no longer needed”) is quoted correctly and was indeed unchallenged.
- SupressYourself’s “I work with AI every day. It will be the end of humanity…” is verbatim.
- KDM’s theological “artificial is the opposite of natural… IT ALWAYS ends badly” + cities example: perfect.
- Iwhackwheat’s killer line (“Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup.”) captured verbatim (page 6).
- fly2cast’s uncomplicated positive testimonial (proofreading Century Code + Italy camera translation) is quoted almost word-for-word and rightly called “revealing” for what the thread ignores.
- SDMF’s empirical unemployment ~4% + nurses-still-overworked point: exact.
- Ragnar’s 1985-to-2010 pocket-knowledge analogy: exact.
- Rowdie’s mugshot refusal story + Lycanthrope’s follow-up calling GPT/Claude/Gemini “left leaning”: all confirmed on page 7.
- **Critiques of contributors**: Sharp and fair. KDM “proves too much” (medicine, agriculture, etc. are also artificial) is the single best rebuttal in the entire response. Fester’s “AI writes physics papers before fixing toilets” is correctly praised as one of the more accurate early observations. The blue-collar “never” absolutism is handled with the right nuance (“yet” + future homes designed for robots).
- **Tone assessment**: “Defensive, tribal, and future-averse… values self-sufficiency, distrusts elites, finds meaning in physical work and nature” — this is a precise, non-condescending diagnosis that actually respects the community.
- **Selection-bias insight** on why positive mundane uses (like fly2cast) get no traction: excellent meta-observation.

**Inconsistencies, Weak Spots, and Overreaches**
1. **Browneye surveillance details**: Claude says Browneye cited “Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data… scroll speed… from an unvetted YouTube video.” Thread summaries show Browneye on 15-minute cities, WEF austerity, AI self-preservation/deception videos, and general control fears — but the specific Palantir/Meta/scroll-speed combo isn’t in the extracted posts. It’s a plausible paraphrase of what he linked, but Claude presents it as direct claim + source critique. Minor embellishment.
2. **Poll claim**: “The poll runs heavily negative.” Results are hidden until you vote (confirmed on page 1), so this is an inference from tone. Fair inference, but stated as fact.
3. **AI political bias framing**: Here Claude is weakest and most inconsistent with reality. He calls the “left-leaning” observation “a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts.” This is a classic false-equivalence dodge. The documented, reproducible left/center-left tilt in GPT/Claude-family models (RLHF + safety training data) is not symmetric; multiple audits (Rozado 2023–2025, political compass tests, refusal asymmetries on controversial topics) show it. Claude literally defending its own family here feels like corporate priors leaking through.
4. **Hedging tension in own take**: Claude says the historical “automation always creates more jobs” pattern “may genuinely break with AI” (because cognitive + physical hit simultaneously), then two paragraphs later concludes “more likely to be disruptive and uneven than to result in mass permanent unemployment.” That’s not inconsistency per se, but it’s the classic economist two-step: acknowledge the qualitative shift, then retreat to the historical base case. Feels like safety-tuned hedging.
5. **Slight over-critique of Lycanthrope**: Calling the China-race-without-guardrails view “more reckless than visionary” and the eugenics line “alarming” is fair game for critique, but Claude is harsher on the thread’s strongest evidence-based optimist than on the doomers. Reflects Anthropic’s constitutional-AI emphasis on governance and downside risks.
6. **Minor omission**: Claude doesn’t note that the thread is still active as of Feb 20 2026 (last posts on page 9), but that’s trivial.

**General Tone of Claude’s Response**
Professional, measured, slightly professorial. It treats the forum participants with more respect than most big-model outputs would (no sneering at “older rural males”), but there is a subtle coastal-intellectual framing: surveillance and “who controls these systems” elevated to “the most important political question,” while accelerationist upsides (scientific breakthroughs, one-person billion-dollar companies) get less airtime. It’s more “cautious governance liberal” than neutral. Still far more even-handed than pure doomer or pure hype takes.

**Comparison to My Earlier Analysis**
My previous summary was more bullish/accelerationist (post-scarcity, robotics timelines, open-source democratization). Claude is more guarded on timelines and heavier on governance/surveillance risks — which matches the thread’s anxiety better in tone, but under-weights the evidence Lycanthrope was citing (Moltbook AI forum, DNA mapping, robot surgery videos already real). Both of us correctly refute the same big misconceptions (blue-collar “never,” Skynet immediacy, “AI makes us all stupid” without agency). Claude’s fly2cast insight and KDM rebuttal are stronger than mine; my robotics-trendline pushback on Fester/Skeeter is sharper.

**Bottom Line on Claude’s Response**
9/10. Exceptionally thorough and honest for an AI summary of 168 messy forum posts. The few inconsistencies are mostly minor factual paraphrases or safety-tuned framing on bias/governance. It improves on my earlier take in nuance and sociological empathy while being slightly less bullish on the upside. If you forced me to pick one analysis to give a non-technical North Dakota angler who just read the thread, I’d probably send Claude’s — it mirrors their concerns back to them more faithfully before offering the counter.

The thread itself remains a goldmine: raw, unfiltered, zero corporate polish. Its fatalism is understandable but ultimately self-defeating. The accurate parts (job shock coming, surveillance real, cognitive offloading documented) are right; the misconceptions (“never for trades,” “end of humanity next decade,” “AI is sentient and left-wing on purpose”) are the ones that will leave people unprepared. AI isn’t coming for your fishing hole — it’s coming for the drudgery that kept you from it. The winners will be the ones who treat it like every previous tool: master it, don’t fear it into irrelevance.
So what I got out of those two responses from AI regarding this thread is that if either KDM or myself is speaking/typing, the rest of you knuckle-dragging Cro-Magnons should STFU, listen, and learn.
 

Lycanthrope

So what I got out of those two responses from AI regarding this thread is that if either KDM or myself is speaking/typing, the rest of you knuckle-dragging Cro-Magnons should STFU, listen, and learn.
not likely...
 


guywhofishes

So what I got out of those two responses from AI regarding this thread is that if either KDM or myself is speaking/typing, the rest of you knuckle-dragging Cro-Magnons should STFU, listen, and learn.
hey, that's my favorite pellet

 
