A.I. Are you Excited?
Lycanthrope said:

CLAUDE SONNET 4.6:

Perfect, let me fetch all the pages now. Good — all 9 pages loaded. Now let me write the full analysis you asked for. This is a meaty thread (168 posts), so I'll give you a real treatment of the major contributors, genuine critique of their arguments, and then my own honest take on where AI is going and the common misconceptions the thread surfaces.

Full Analysis: Nodak Angler AI Thread — "A.I. Are You Excited?"

Overview

This is a 168-post thread on a North Dakota hunting and fishing forum, running from January 28 through February 20, 2026. It's a genuinely interesting social document — a snapshot of how a specific demographic of older, mostly rural, mostly male, practically-minded Americans are processing one of the most consequential technological transitions in human history. The poll runs heavily negative. The tone is skeptical to fatalistic. But there's more intellectual range here than the opening salvo suggests.

The Major Contributors — Summaries and Critiques

Lycanthrope (thread starter, Bismarck) is the most prolific and intellectually engaged poster. He consistently pushes the conversation toward bigger questions: AI's impact on geopolitics, job displacement timelines, the singularity, land value as a hedge, military applications, and the competitiveness implications of American skepticism vs. China's ambitions. He shares links to podcasts, research, and even the Moltbook AI-to-AI forum. He's clearly spending real time thinking about this.

Critique: Lycanthrope is genuinely curious but his framing has some notable blind spots. His claim that "most knowledge jobs won't exist in a decade" is stated with a confidence that isn't justified — that's a highly contested prediction even among top AI researchers. His early eugenics comment (about reproduction being "limited to selected individuals when low IQ laborers are no longer needed") is alarming and goes entirely unchallenged, which says something about both him and the room. He also dismisses the US's thoughtful regulatory instinct as simple backwards thinking, without acknowledging that unregulated AI races carry their own serious risks. He's right that complacency has strategic costs — but the argument that America needs to out-sprint China on AI without governance guardrails is more reckless than visionary.

SupressYourself works at a major software company and claims to use AI daily. He's the thread's most credentialed voice and also its most apocalyptic — "it will be the end of humanity, the only questions are when and how." When pressed by Davy Crockett on what specifically he's seen, he explains that AI is already writing most new code, and that its exponential capability growth when paired with robotics will eventually make billions of humans purposeless.

Critique: There's something both credible and frustrating about SupressYourself. The fact that he actually works with AI daily gives his pessimism more weight than the average forum doomer. But "end of humanity" is a conclusion, not an argument. He never engages with the obvious counterpoint: every prior automation wave (industrial revolution, computing, internet) was also predicted to hollow out employment, and none did in net terms. That doesn't mean this wave will be the same — it may genuinely be categorically different — but the assertion needs more than his personal professional dread to land.

KDM (Valley City) delivers the thread's most philosophically coherent anti-AI post. His argument is theological and cultural: "artificial" is by definition a copy of the natural, God-created world, and mankind's history of trying to improve on or replicate God's creation "always ends badly." He uses cities as his primary example — artificial environments that damage people physically, mentally, spiritually, and emotionally — and notes that hunters and fishermen on this very site seek out natural spaces precisely because they need what's real.

Critique: This is the most intellectually interesting anti-AI argument in the thread precisely because it doesn't rely on science fiction or job statistics. It's grounded in a coherent worldview about human nature and the limits of artifice. But it proves too much. By this logic, medicine, agriculture, architecture, and writing itself are suspect — all artificial interventions on nature. KDM's argument doesn't distinguish between different kinds and scales of artificial intervention, which limits its precision. That said, his point that people seek out natural environments as a corrective to artificial ones is empirically well-supported and more than a throwaway observation.

Fester is a consistent voice of pragmatic skepticism throughout. His concern from the start is job displacement and the absence of regulatory frameworks. He pushes back on Lycanthrope's claim that robots are already nimble enough to replace electricians doing industrial wire pulls, drawing on actual trade knowledge. He also makes a sharp early observation: AI will write a physics paper before it fixes your toilet, suggesting white-collar knowledge jobs are more vulnerable than blue-collar trades — which is actually one of the more accurate observations in the thread.

Critique: Fester is generally grounded and his points are reasonable, but he's susceptible to the same underestimation trap as the blue-collar camp broadly. "We won't see it in our lifetimes" is the same thing people said about GPS, touchscreens, and the internet. Technological pace has consistently surprised even practitioners inside the field. His skepticism about robotic dexterity in industrial settings is fair today — but the question of whether it's fair in 10 or 15 years is genuinely open. Boston Dynamics and Figure AI robots are already doing tasks that would have seemed absurd five years ago.

Skeeter and the blue-collar caucus (including Eatsleeptrap to some extent) mount the most emotionally charged defense of trade work. Skeeter is flat-out certain that "a robot will never be able to run wires through a house or fully plumb a house for water or sewer." He argues that the hands-on trades will outlast white-collar work and that school counselors should stop pushing four-year degrees.

Critique: The trade-school point is separately valid and largely correct as career advice for the next 10–15 years. But "never" is a big word. Skeeter's confidence that physical complexity permanently protects blue-collar work ignores that construction robots already lay bricks, pour concrete, and weld in controlled environments. Allen's counter-argument that future homes may be designed around robotic construction from the ground up — not retrofitted for it — is actually the sharper insight. The question isn't whether current robots can do current plumbing in current houses. It's whether future construction will be designed around human hands at all.

Iwhackwheat produces one of the thread's best lines, cleanly cutting through the blue-collar confidence: "Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup." This is a genuinely insightful point that doesn't get the attention it deserves. Economic disruption doesn't require that AI perfectly replicates a skill — it only requires that it's good enough and cheap enough that businesses restructure around it.

Browneye (Flasher, ND) introduces the surveillance and control angle, citing Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data. He references "Minority Report" as a template for AI-enabled predictive policing. Several people take this seriously.

Critique: The concern about AI-enhanced surveillance and social scoring is genuinely legitimate — China's social credit system is a real thing, and Western democracies have been incrementally building similar capabilities under different names. But Browneye's specific claim about Palantir and Meta compiling real-time trust scores from scroll speed is not substantiated — it reads like something from an unvetted YouTube video, which by his own admission is the source. The kernel of truth (AI is being used for behavioral prediction and surveillance) gets embedded in a more speculative claim that makes it easier to dismiss.

Davy Crockett is the thread's most interesting evolving voice. He starts as a pure skeptic ("hunker in the hills, go cash-only"), but by the middle of the thread is making notably more nuanced observations: "In the past, people who embraced new technology have done well for themselves. I believe AI will be the same." He also raises a genuinely smart engineering point about robot hacking vulnerability and knowledge transfer between robotic systems. He ends up as a kind of pragmatic cautious optimist.

Critique: He's right to update his views as the conversation develops. His practical questions about robot vulnerability and the learning curve for autonomous systems show more real-world systems thinking than most. The evolution from pure rejection to engaged curiosity over the course of the thread is honest and admirable.

SDMF is one of the thread's underrated voices. His late-thread post is probably the most empirically grounded of anyone's: he points out that US unemployment hovers near 4% despite decades of increasing computerization and robotization, that automation historically creates new job categories alongside displacing old ones, and that nurses haven't been freed up by medical computing — they've just been asked to do more. He argues that adaptable, motivated people will find fulfilling work in a transformed economy.

Critique: SDMF is essentially making the standard economic counterargument to technological unemployment — and it's been right every time so far. His empirical grounding is the best in the thread. His weaker spot is the implicit assumption that "this time won't be different," when there are serious economists who argue that LLM-scale AI may represent a qualitative shift rather than another incremental automation wave. He acknowledges "big fast changes are coming" — but doesn't fully grapple with the possibility that the historical pattern may not extrapolate.

Sum1 and FishFinder97 represent the thread's biblical apocalypticism wing. Sum1 interprets UBI + digital currency + AI as the fulfillment of Revelation's "mark of the beast" prophecy. FishFinder97 sees AI as "just the start of the end" in a sequence leading to UBI and digital currency. These posts get more likes than you might expect.

Critique: There's no point arguing with eschatological interpretation on its own terms. What's worth noting sociologically is that for a segment of this community, concerns about AI are inseparable from a broader worldview in which powerful, globalized, technocratic systems are inherently suspect and foretold. This isn't irrational given their value system — it's internally consistent. The problem is that it forecloses engagement rather than enabling it.

Ragnar (Pensacola) delivers the thread's best corrective to the "it'll never happen in our lifetime" crowd: if you told him in 1985 that by 2010 he'd have the entire accumulated knowledge of humanity in his pocket and could call anyone anywhere for free, he'd have said you were crazy. This is the best single argument anyone makes for respecting the pace of technological change.

lunkerslayer is reliably enthusiastic about the medical applications — personalized medicine, neurological repair, clinical trial simulation — and he's not wrong that these are genuine frontiers. His posts are a bit sprawling but he's engaging with real possibilities.

fly2cast is the only person in the thread who gives a completely uncomplicated positive testimonial: he uses AI daily for proofreading legal documents and used it in Italy to translate signs and learn about landmarks. No drama, no existential crisis, just a useful tool making his life better. This perspective gets almost no traction in the thread, which is revealing.

General Tone and Accuracy Assessment

The tone is predominantly defensive, tribal, and future-averse, with genuine intellectual sparks scattered throughout. The community skews toward people who value self-sufficiency, distrust institutions and technocratic elites, and find meaning in physical work and the natural world. That's not a flaw — it's a perspective with real coherence. But it means the thread is not well-calibrated to evaluate AI clearly.

Where the thread is accurate:

- Job displacement is a real and serious risk that deserves more policy attention than it gets.
- AI's accuracy is imperfect and it can be confidently wrong (Fester's ChatGPT tax code battle is a common experience).
- The concentration of AI power in the hands of a small number of companies and individuals is a legitimate governance concern.
- The cognitive dependency risk is real and supported by the MIT study Fly Carpin cited.
- The surveillance and behavioral prediction angle is worth taking seriously even if the specific claims are sometimes unverified.

Where the thread goes wrong or overstates:

- "The end of humanity" as a serious near-term prediction conflates near-term disruption with extinction scenarios. Most AI researchers, including pessimistic ones, do not assign high probability to extinction within any meaningful timeframe.
- The social-media-as-analog argument keeps appearing as a conversation stopper. AI and social media share some characteristics (concentration of power, information distortion, addiction dynamics) but are fundamentally different in their capability profile and economic structure.
- Browneye's viral-video-sourced surveillance claims are stated as fact.
- The "AI can't do physical work, so blue-collar jobs are safe" argument underestimates robotic progress and, more importantly, underestimates how AI changes what kinds of physical work get built in the first place.
- Rowdie's claim that AI refused to find a mugshot of a minority suspect but said it would show one of a white suspect is almost certainly a misread of what happened — image search for real individuals raises privacy concerns regardless of race, not racial ones per se. Lycanthrope endorses his reading that this proves AI is "very left leaning," which is a significant oversimplification of how content moderation and safety guidelines work.

My Honest Take on Where AI Is Going — And the Real Misconceptions

Having read 168 posts of this, here's what I think is actually true, as plainly as I can put it.

The job displacement concern is real but more nuanced than the thread allows. Every automation wave has destroyed job categories and created new ones, and the net result over 200+ years has been more jobs, better wages, and shorter working hours. That pattern may genuinely break with AI — because for the first time, the technology threatens cognitive labor at the same time as physical labor. But even here, the transition is more likely to be disruptive and uneven than to result in mass permanent unemployment. The people at greatest risk aren't the self-reliant, practically skilled people posting on this forum — they're mid-level knowledge workers doing repetitive cognitive tasks: data entry, basic legal and financial analysis, customer service, first-draft writing. The people best protected are either at the very top of cognitive complexity or embedded in physical, contextual, relationship-based work that robots genuinely can't replicate — yet.

"Yet" is doing enormous work in every reassuring prediction. Fester is right that industrial wire pulling isn't like robot surgery. But the honest answer is we don't know what the robotics landscape looks like in 15 years. The people who are most certain that physical trades are permanently safe are making the same mistake as the people who were certain in 1990 that computers would never replace skilled knowledge workers.

The AI-as-humanity-ender scenario is being dramatically overclaimed. SupressYourself works in AI and is still stating it as a near-certainty. But current AI systems — including the most advanced ones — are nowhere near the kind of recursive self-improvement, goal formation, and autonomous agency that the existential risk scenarios require. Those scenarios are worth taking seriously as tail risks in the long run, but treating them as imminent certainties misrepresents the state of the technology.

The "AI is biased/left-leaning" claim deserves a more careful reading. When Rowdie says AI refused to show him a minority suspect's photo, the most plausible explanation isn't political bias — it's that AI image tools are cautious about generating or retrieving real individuals' images for privacy and legal reasons. That caution can be imperfect and sometimes overly conservative. But the interpretation that it reflects racial politics is a leap. Similarly, Lycanthrope's claim that ChatGPT and Claude are "very left leaning" on political topics is a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts.

The competition-with-China argument has real merit but is also regularly abused. Lycanthrope is correct that AI leadership has serious national security and economic implications, and that excessive regulatory caution carries its own risks. But "we can't afford to slow down" has been used to justify skipping safety work in every major technological domain from nuclear to pharmaceuticals to social media, and the results have been consistently mixed. Racing China is a real strategic consideration — it's not a blank check to forgo governance.

The most underappreciated point in this entire thread is fly2cast's. He uses AI every day as a practical tool, it saves him time, it went well on his Italy trip, he has no complaints. That experience is wildly more common than the forum discourse would suggest, but it generates no drama and therefore no posts. The selection bias in AI discourse — both on this forum and in media broadly — is toward fear, outrage, and apocalypse. The mundane daily utility doesn't get threads.

What I genuinely believe about where this goes: AI will be enormously disruptive to the economy, more so than social media and probably more so than the internet in terms of sheer economic impact. The disruption will be uneven — hitting some sectors hard and fast, others slowly, and leaving others largely untouched for years. The people who will be hurt most are those whose work is routine, cognitive, and not embedded in complex physical or social contexts. The people who will benefit most are those who learn to work with these tools effectively. The existential risks are real as long-run tail risks but not imminent. The surveillance risks are more pressing and less discussed. And the question of who controls these systems — a handful of companies and governments — is the most important political question of the next decade, and it barely gets addressed on this thread or anywhere else with the seriousness it deserves.

The Nodak Angler community's instinct that something big and potentially dangerous is happening is correct. Their analysis of what exactly the danger is, and their reflexive dismissal of any upside, is where they lose the thread.
[QUOTE="Lycanthrope, post: 481642, member: 562"] CLAUDE SONNET 4.6: Perfect, let me fetch all the pages now.Good — all 9 pages loaded. Now let me write the full analysis you asked for. This is a meaty thread (168 posts), so I'll give you a real treatment of the major contributors, genuine critique of their arguments, and then my own honest take on where AI is going and the common misconceptions the thread surfaces. [HR][/HR] [HEADING=2]Full Analysis: Nodak Angler AI Thread — "A.I. Are You Excited?"[/HEADING] [HEADING=2]Overview[/HEADING] This is a 168-post thread on a North Dakota hunting and fishing forum, running from January 28 through February 20, 2026. It's a genuinely interesting social document — a snapshot of how a specific demographic of older, mostly rural, mostly male, practically-minded Americans are processing one of the most consequential technological transitions in human history. The poll runs heavily negative. The tone is skeptical to fatalistic. But there's more intellectual range here than the opening salvo suggests. [HR][/HR] [HEADING=2]The Major Contributors — Summaries and Critiques[/HEADING] [B]Lycanthrope[/B] (thread starter, Bismarck) is the most prolific and intellectually engaged poster. He consistently pushes the conversation toward bigger questions: AI's impact on geopolitics, job displacement timelines, the singularity, land value as a hedge, military applications, and the competitiveness implications of American skepticism vs. China's ambitions. He shares links to podcasts, research, and even the Moltbook AI-to-AI forum. He's clearly spending real time thinking about this. [I]Critique:[/I] Lycanthrope is genuinely curious but his framing has some notable blind spots. His claim that "most knowledge jobs won't exist in a decade" is stated with a confidence that isn't justified — that's a highly contested prediction even among top AI researchers. His early eugenics comment (about reproduction being "limited to selected individuals when low IQ laborers are no longer needed") is alarming and goes entirely unchallenged, which says something about both him and the room. He also dismisses the US's thoughtful regulatory instinct as simple backwards thinking, without acknowledging that unregulated AI races carry their own serious risks. He's right that complacency has strategic costs — but the argument that America needs to out-sprint China on AI without governance guardrails is more reckless than visionary. [B]SupressYourself[/B] works at a major software company and claims to use AI daily. He's the thread's most credentialed voice and also its most apocalyptic — "it will be the end of humanity, the only questions are when and how." When pressed by Davy Crockett on what specifically he's seen, he explains that AI is already writing most new code, and that its exponential capability growth when paired with robotics will eventually make billions of humans purposeless. [I]Critique:[/I] There's something both credible and frustrating about SupressYourself. The fact that he actually works with AI daily gives his pessimism more weight than the average forum doomer. But "end of humanity" is a conclusion, not an argument. He never engages with the obvious counterpoint: every prior automation wave (industrial revolution, computing, internet) was also predicted to hollow out employment, and none did in net terms. That doesn't mean this wave will be the same — it may genuinely be categorically different — but the assertion needs more than his personal professional dread to land. 
[B]KDM[/B] (Valley City) delivers the thread's most philosophically coherent anti-AI post. His argument is theological and cultural: "artificial" is by definition a copy of the natural, God-created world, and mankind's history of trying to improve on or replicate God's creation "always ends badly." He uses cities as his primary example — artificial environments that damage people physically, mentally, spiritually, and emotionally — and notes that hunters and fishermen on this very site seek out natural spaces precisely because they need what's real. [I]Critique:[/I] This is the most intellectually interesting anti-AI argument in the thread precisely because it doesn't rely on science fiction or job statistics. It's grounded in a coherent worldview about human nature and the limits of artifice. But it proves too much. By this logic, medicine, agriculture, architecture, and writing itself are suspect — all artificial interventions on nature. KDM's argument doesn't distinguish between different kinds and scales of artificial intervention, which limits its precision. That said, his point that people seek out natural environments as a corrective to artificial ones is empirically well-supported and more than a throwaway observation. [B]Fester[/B] is a consistent voice of pragmatic skepticism throughout. His concern from the start is job displacement and the absence of regulatory frameworks. He pushes back on Lycanthrope's claim that robots are already nimble enough to replace electricians doing industrial wire pulls, drawing on actual trade knowledge. He also makes a sharp early observation: AI will write a physics paper before it fixes your toilet, suggesting white-collar knowledge jobs are more vulnerable than blue-collar trades — which is actually one of the more accurate observations in the thread. [I]Critique:[/I] Fester is generally grounded and his points are reasonable, but he's susceptible to the same underestimation trap as the blue-collar camp broadly. "We won't see it in our lifetimes" is the same thing people said about GPS, touchscreens, and the internet. Technological pace has consistently surprised even practitioners inside the field. His skepticism about robotic dexterity in industrial settings is fair today — but the question of whether it's fair in 10 or 15 years is genuinely open. Boston Dynamics and Figure AI robots are already doing tasks that would have seemed absurd five years ago. [B]Skeeter and the blue-collar caucus[/B] (including Eatsleeptrap to some extent) mount the most emotionally charged defense of trade work. Skeeter is flat-out certain that "a robot will never be able to run wires through a house or fully plumb a house for water or sewer." He argues that the hands-on trades will outlast white-collar work and that school counselors should stop pushing four-year degrees. [I]Critique:[/I] The trade-school point is separately valid and largely correct as career advice for the next 10–15 years. But "never" is a big word. Skeeter's confidence that physical complexity permanently protects blue-collar work ignores that construction robots already lay bricks, pour concrete, and weld in controlled environments. Allen's counter-argument that future homes may be designed around robotic construction from the ground up — not retrofitted for it — is actually the sharper insight. The question isn't whether current robots can do current plumbing in current houses. It's whether future construction will be designed around human hands at all. 
[B]Iwhackwheat[/B] produces one of the thread's best lines, cleanly cutting through the blue-collar confidence: "Do I think they could do your job? Nope. Do I think they could convince society that your job is no longer necessary? Yup." This is a genuinely insightful point that doesn't get the attention it deserves. Economic disruption doesn't require that AI perfectly replicates a skill — it only requires that it's good enough and cheap enough that businesses restructure around it. [B]Browneye[/B] (Flasher, ND) introduces the surveillance and control angle, citing Palantir and Meta supposedly building behavioral trust scores using facial recognition and social data. He references "Minority Report" as a template for AI-enabled predictive policing. Several people take this seriously. [I]Critique:[/I] The concern about AI-enhanced surveillance and social scoring is genuinely legitimate — China's social credit system is a real thing, and Western democracies have been incrementally building similar capabilities under different names. But Browneye's specific claim about Palantir and Meta compiling real-time trust scores from scroll speed is not substantiated — it reads like something from an unvetted YouTube video, which by his own admission is the source. The kernel of truth (AI is being used for behavioral prediction and surveillance) gets embedded in a more speculative claim that makes it easier to dismiss. [B]Davy Crockett[/B] is the thread's most interesting evolving voice. He starts as a pure skeptic ("hunker in the hills, go cash-only"), but by the middle of the thread is making notably more nuanced observations: "In the past, people who embraced new technology have done well for themselves. I believe AI will be the same." He also raises a genuinely smart engineering point about robot hacking vulnerability and knowledge transfer between robotic systems. He ends up as a kind of pragmatic cautious optimist. [I]Critique:[/I] He's right to update his views as the conversation develops. His practical questions about robot vulnerability and the learning curve for autonomous systems show more real-world systems thinking than most. The evolution from pure rejection to engaged curiosity over the course of the thread is honest and admirable. [B]SDMF[/B] is one of the thread's underrated voices. His late-thread post is probably the most empirically grounded of anyone's: he points out that US unemployment hovers near 4% despite decades of increasing computerization and robotization, that automation historically creates new job categories alongside displacing old ones, and that nurses haven't been freed up by medical computing — they've just been asked to do more. He argues that adaptable, motivated people will find fulfilling work in a transformed economy. [I]Critique:[/I] SDMF is essentially making the standard economic counterargument to technological unemployment — and it's been right every time so far. His empirical grounding is the best in the thread. His weaker spot is the implicit assumption that "this time won't be different," when there are serious economists who argue that LLM-scale AI may represent a qualitative shift rather than another incremental automation wave. He acknowledges "big fast changes are coming" — but doesn't fully grapple with the possibility that the historical pattern may not extrapolate. [B]Sum1 and FishFinder97[/B] represent the thread's biblical apocalypticism wing. 
Sum1 interprets UBI + digital currency + AI as the fulfillment of Revelation's "mark of the beast" prophecy. FishFinder97 sees AI as "just the start of the end" in a sequence leading to UBI and digital currency. These posts get more likes than you might expect. [I]Critique:[/I] There's no point arguing with eschatological interpretation on its own terms. What's worth noting sociologically is that for a segment of this community, concerns about AI are inseparable from a broader worldview in which powerful, globalized, technocratic systems are inherently suspect and foretold. This isn't irrational given their value system — it's internally consistent. The problem is that it forecloses engagement rather than enabling it. [B]Ragnar[/B] (Pensacola) delivers the thread's best corrective to the "it'll never happen in our lifetime" crowd: if you told him in 1985 that by 2010 he'd have the entire accumulated knowledge of humanity in his pocket and could call anyone anywhere for free, he'd have said you were crazy. This is the best single argument anyone makes for respecting the pace of technological change. [B]lunkerslayer[/B] is reliably enthusiastic about the medical applications — personalized medicine, neurological repair, clinical trial simulation — and he's not wrong that these are genuine frontiers. His posts are a bit sprawling but he's engaging with real possibilities. [B]fly2cast[/B] is the only person in the thread who gives a completely uncomplicated positive testimonial: he uses AI daily for proofreading legal documents and used it in Italy to translate signs and learn about landmarks. No drama, no existential crisis, just a useful tool making his life better. This perspective gets almost no traction in the thread, which is revealing. [HR][/HR] [HEADING=2]General Tone and Accuracy Assessment[/HEADING] The tone is predominantly defensive, tribal, and future-averse, with genuine intellectual sparks scattered throughout. The community skews toward people who value self-sufficiency, distrust institutions and technocratic elites, and find meaning in physical work and the natural world. That's not a flaw — it's a perspective with real coherence. But it means the thread is not well-calibrated to evaluate AI clearly. [B]Where the thread is accurate:[/B] [LIST] [*]Job displacement is a real and serious risk that deserves more policy attention than it gets. [*]AI's accuracy is imperfect and it can be confidently wrong (Fester's ChatGPT tax code battle is a common experience). [*]The concentration of AI power in the hands of a small number of companies and individuals is a legitimate governance concern. [*]The cognitive dependency risk is real and supported by the MIT study Fly Carpin cited. [*]The surveillance and behavioral prediction angle is worth taking seriously even if the specific claims are sometimes unverified. [/LIST] [B]Where the thread goes wrong or overstates:[/B] [LIST] [*]"The end of humanity" as a serious near-term prediction conflates near-term disruption with extinction scenarios. Most AI researchers, including pessimistic ones, do not assign high probability to extinction within any meaningful timeframe. [*]The social-media-as-analog argument keeps appearing as a conversation stopper. AI and social media share some characteristics (concentration of power, information distortion, addiction dynamics) but are fundamentally different in their capability profile and economic structure. [*]Browneye's viral-video-sourced surveillance claims are stated as fact. 
[*]The "AI can't do physical work, so blue-collar jobs are safe" argument underestimates robotic progress and, more importantly, underestimates how AI changes what kinds of physical work get built in the first place. [*]Rowdie's claim that AI refused to find a mugshot of a minority suspect but said it would show one of a white suspect is almost certainly a misread of what happened — image search for real individuals raises privacy concerns regardless of race, not racial ones per se. Lycanthrope endorses his reading that this proves AI is "very left leaning," which is a significant oversimplification of how content moderation and safety guidelines work. [/LIST] [HR][/HR] [HEADING=2]My Honest Take on Where AI Is Going — And the Real Misconceptions[/HEADING] Having read 168 posts of this, here's what I think is actually true, as plainly as I can put it. [B]The job displacement concern is real but more nuanced than the thread allows.[/B] Every automation wave has destroyed job categories and created new ones, and the net result over 200+ years has been more jobs, better wages, and shorter working hours. That pattern may genuinely break with AI — because for the first time, the technology threatens cognitive labor at the same time as physical labor. But even here, the transition is more likely to be disruptive and uneven than to result in mass permanent unemployment. The people at greatest risk aren't the self-reliant, practically skilled people posting on this forum — they're mid-level knowledge workers doing repetitive cognitive tasks: data entry, basic legal and financial analysis, customer service, first-draft writing. The people best protected are either at the very top of cognitive complexity or embedded in physical, contextual, relationship-based work that robots genuinely can't replicate — yet. [B]"Yet" is doing enormous work in every reassuring prediction.[/B] Fester is right that industrial wire pulling isn't like robot surgery. But the honest answer is we don't know what the robotics landscape looks like in 15 years. The people who are most certain that physical trades are permanently safe are making the same mistake as the people who were certain in 1990 that computers would never replace skilled knowledge workers. [B]The AI-as-humanity-ender scenario is being dramatically overclaimed.[/B] SupressYourself works in AI and is still stating it as a near-certainty. But current AI systems — including the most advanced ones — are nowhere near the kind of recursive self-improvement, goal formation, and autonomous agency that the existential risk scenarios require. Those scenarios are worth taking seriously as tail risks in the long run, but treating them as imminent certainties misrepresents the state of the technology. [B]The "AI is biased/left-leaning" claim deserves a more careful reading.[/B] When Rowdie says AI refused to show him a minority suspect's photo, the most plausible explanation isn't political bias — it's that AI image tools are cautious about generating or retrieving real individuals' images for privacy and legal reasons. That caution can be imperfect and sometimes overly conservative. But the interpretation that it reflects racial politics is a leap. Similarly, Lycanthrope's claim that ChatGPT and Claude are "very left leaning" on political topics is a subjective assessment from people who identify as conservative — there are equally frustrated liberals who call AI systems right-leaning in different contexts. 
[B]The competition-with-China argument has real merit but is also regularly abused.[/B] Lycanthrope is correct that AI leadership has serious national security and economic implications, and that excessive regulatory caution carries its own risks. But "we can't afford to slow down" has been used to justify skipping safety work in every major technological domain from nuclear to pharmaceuticals to social media, and the results have been consistently mixed. Racing China is a real strategic consideration — it's not a blank check to forgo governance. [B]The most underappreciated point in this entire thread is fly2cast's.[/B] He uses AI every day as a practical tool, it saves him time, it went well on his Italy trip, he has no complaints. That experience is wildly more common than the forum discourse would suggest, but it generates no drama and therefore no posts. The selection bias in AI discourse — both on this forum and in media broadly — is toward fear, outrage, and apocalypse. The mundane daily utility doesn't get threads. [B]What I genuinely believe about where this goes:[/B] AI will be enormously disruptive to the economy, more so than social media and probably more so than the internet in terms of sheer economic impact. The disruption will be uneven — hitting some sectors hard and fast, others slowly, and leaving others largely untouched for years. The people who will be hurt most are those whose work is routine, cognitive, and not embedded in complex physical or social contexts. The people who will benefit most are those who learn to work with these tools effectively. The existential risks are real as long-run tail risks but not imminent. The surveillance risks are more pressing and less discussed. And the question of who controls these systems — a handful of companies and governments — is the most important political question of the next decade, and it barely gets addressed on this thread or anywhere else with the seriousness it deserves. The Nodak Angler community's instinct that something big and potentially dangerous is happening is correct. Their analysis of what exactly the danger is, and their reflexive dismissal of any upside, is where they lose the thread. [/QUOTE]
Verification
What is the most common fish caught on this site?
Post reply