GPT might be an information virus (2023)

(nonint.com)

122 points | by 3willows 2 days ago ago

108 comments

  • karaterobot 2 days ago ago

    I don't think the results of the last two decades of mass information dissemination have been all that great. People didn't trust anything on the internet before GPT, and the internet was already a cacophony of screeching voices. Smart people were already decoupling from social media and moving meaningful interaction to enclaves well before ChatGPT became a factor. ChatGPT did not ruin the internet; it was unleashed on an internet that was broken to begin with.

    If, as this article predicts, the result of GPT is that we don't trust information from the internet, and everybody moves away from it, that's great. Traditional journalism was better, as it turns out. Talking mainly to your friends rather than millions of people was better, as it turns out. I'm ready to go back to that, should it come to it.

    But it won't. This essay is making a catastrophic prediction that won't come to pass. Whatever the future is, it's going to be something nobody is predicting yet. It'll be better than the doomsayers predict, and worse than what the cheerleaders say. It will be nothing like a simple magnification of the present concern over epistemology.

    • dns_snek a day ago ago

      You're confusing social media with the internet at large. "Traditional journalism" isn't a replacement for a niche blog writing about highly specific, highly detailed technical topics. What the blog post is describing isn't some bold prediction, it's already happening.

      Today it's far more difficult (and personally quite frustrating) to find information written by actual people with actual experience on any given topic than it was 5 years ago, because for every one of those articles there are now 20 more written by LLMs, often outranking them. This frustration is only going to grow as the LLM proliferation continues.

      • Henchman21 21 hours ago ago

        Do you think this is also true of bookstores and libraries today?

  • andy99 2 days ago ago

    I think it is an information virus, but differently - it's homogenized everything, and made people dumber and lazier. It's poisoned public and professional discourse by reducing writing and thinking from the richness of humanity to one narrow style with a tiny latent space, and simultaneously convinced people that this is what good writing looks like. And it's erased thought from broad classes of endeavor. This virus is much worse than the relatively benign symptoms described in the article.

    • A4ET8a8uTh0_v2 2 days ago ago

      Like most progress, it made some things easier (and, as a result, some things worse). What I do find particularly fascinating is that it is doing that even in professions that should know better (lawyers, doctors). That my boss uses it is no surprise to me, though. I always suspected he never really read my emails.

      • kldg 2 days ago ago

        I've definitely been surprised by how it's being used; it's replacing people in places I don't think (even as a closet AI/LLM enthusiast) AI should ever be used: elder care, customer support (even on phone lines), homework grading. But I shouldn't have been so surprised, because some were already using robots for these tasks (or maybe not robots explicitly, but making CSRs and the like stick to scripts); my daughter was taking college placement tests recently -- even the essay questions were graded by software, and she was watched by software as she wrote them. These still seem to me like jobs which fundamentally require a human touch -- it's been especially amazing to me that teachers are using AI to detect AI; you can't determine whether or not a robot wrote it, but you can assign a grade to it? Huh??

        I have a very vocally anti-AI friend, but there is one thing he always goes on about that confuses me to no end: hates AI, strongly wants an AI sexbot, is constantly linking things trying to figure out how to get one, and asking me and the other nerds in our group about how the tech would work. No compromises anywhere except for one of the most human experiences possible. :shrug:

        • sho_hn 2 days ago ago

          To me, the weirdest and most unexpected (though not so much in retrospect) AI use is that people will use it all day long to navigate chat conversations with their boyfriends/girlfriends, having it suggest romantic replies, etc.

          I expect people to be lazy, but that we'd outsource feelings was surprising.

          • quesera a day ago ago

            I have a family member who worked in a Hallmark store when they launched a custom card printing service (in-store, select the cover art and write your own message, printed on a card).

            She says that about 75% of the custom card customers would ask her what they should write for a message.

            She wrote messages of friendship, love, birthdays, graduations, congratulations, sympathy, etc. To support her coworkers on other shifts, she filled an index card box with several dozen canned "custom" messages for Hallmark customers to choose from.

            Somewhat separately, she reports that working at Hallmark is a good way to make a misanthrope out of an intelligent teenager. To which I reply that most of the intelligent teenagers I knew were already misanthropes! But the stories she tells, particularly of Christmas ornament hysteria, are hysterical.

          • johnisgood a day ago ago

            Hmm, LLMs have helped me understand the perspective of my girlfriend better, and taught me to be a better listener and how to not act in various scenarios. I do not really use LLMs to write replies to my girlfriend, however. I have used it before to make some corrections, but the essence remained, and it came from me.

        • A4ET8a8uTh0_v2 2 days ago ago

          It made me chuckle, because I absolutely buy the anecdote about the anti-AI friend. On the other hand, if he applied himself, maybe he could figure it out. I honestly can't say I am not intrigued by the possibility.

          • nunez a day ago ago

            Wait til he finds /r/LocalLLaMA

            Sexbots are the raison d'etre for that sub

      • melagonster a day ago ago

        Their situation is similar to programmers': a proficient user can tell at first glance whether the output is correct.

      • doctorpangloss 2 days ago ago

        People want AI lawyers, but what they really invented are AI judges.

        • arthurcolle 2 days ago ago

          Usually judges start out as lawyers

    • majormajor 2 days ago ago

      https://www.media.mit.edu/publications/your-brain-on-chatgpt... This seems relevant here in a "the results agree" way.

    • kordlessagain 2 days ago ago

      People have always tended toward taking shortcuts. It's human nature. So saying "this technology makes people dumber or lazier" is tricky, because you first need a baseline: exactly how dumb or lazy were people before?

      To quantify it, you'd need measurable changes. For example, if you showed that after widespread LLM adoption, standardized test scores dropped, people's vocabulary shrank significantly, or critical thinking abilities (measured through controlled tests) degraded, you'd have concrete evidence of increased "dumbness."

      But here's the thing: tools, even the simplest ones, like college research papers, always have value depending on context. A student rewriting existing knowledge into clearer language has utility because they improve comprehension or provide easier access. It's still useful work.

      Yes, by default, many LLM outputs sound similar because they're trained to optimize broad consensus of human writing. But it's trivially easy to give an LLM a distinct personality or style. You can have it write like Hemingway or Hunter S. Thompson. You can make it sound academic, folksy, sarcastic, or anything else you like. These traits demonstrably alter output style, information handling, and even the kind of logic or emotional nuance applied.

      Thus, the argument that all LLM writing is homogeneous doesn't hold up. Rather, what's happening is people tend to use default or generic prompts, and therefore receive default or generic results. That's user choice, not a technological constraint.

      In short: people were never uniformly smart or hardworking, so blaming LLMs entirely for declining intellectual rigor is oversimplified. The style complaint? Also overstated: LLMs can easily provide rich diversity if prompted correctly. It's all about how they're used, just like any other powerful tool in history, and just like my comment here.
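
      Here is a minimal sketch of that style-prompting point, assuming the OpenAI Python client; the model name and prompts are placeholders, not a recommendation:

          # pip install openai
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  # The system message is where a distinct persona or style goes.
                  {"role": "system", "content": "Write in the terse, punchy style of Hemingway."},
                  {"role": "user", "content": "Describe a rainy morning in two sentences."},
              ],
          )
          print(response.choices[0].message.content)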

      • majormajor 2 days ago ago

        We could wait for further studies, but some already exist: https://www.media.mit.edu/publications/your-brain-on-chatgpt...

        You say it's human nature to take shortcuts, so the danger of things that provide easy, homogenizing shortcuts should be obvious. It reduces the chance of future innovation by making it easier for more people to have their perspectives silently narrowed.

        Personally I don't need to see more anecdotal examples matching that study to have a pretty strong "this is becoming a problem" leaning. If you learn and expand your mind by doing the work, and now you aren't doing the work, what happens? It's not just "the AI told me this, it can't be wrong" for the uneducated, it's the equivalent of "google maps told me to drive into the pond" for the white-collar crowd that always had those lazy impulses but overcame them through their desire to make a comfortable living.

      • momento 5 hours ago ago

        "The style complaint? Also overstated: L[...]"

        This is how I know this comment was written by an AI.

    • joegibbs 2 days ago ago

      It’s the latest in a series of homogenising inventions - the printing press, radio, television, the internet - that will probably result in people of the future speaking and thinking more similarly than today. First went minor languages, then dialects, now regional differences within languages. Next will probably be the difference between different English accents - I think by 2100 English speakers will all be speaking with a generically American accent no matter where they are on earth. Then next will probably be other national languages - 90% of Swedes and Dutch people already speak English.

      • monkaiju a day ago ago

        The printing press (and the others listed) aren't homogenizing; if anything, they're tools of diversification. They allowed far more novel ideas to be presented and distributed than before. AI, on the other hand, "distils" and "reduces" large amounts of heterogeneous information into much more homogeneous slop.

    • 3willows 2 days ago ago

      Perhaps that is the real danger. Everyone except a small elite who (rightly) feel they understand how LLMs work would simply give up serious thinking and accept whatever "majority" opinion is in their little social media bubble. We wouldn't have the patience to really engage with genuinely different viewpoints any more.

      I recall some Chinese language discussion about the experience of studying abroad in the Anglophone world in the early 20th century and the early 21st century. Paradoxically, even if you are a university student, it may now be harder to break out of the bubble and make friends with non-Chinese/East Asians than before. In the early 20th century, you'd probably be one of the few non-White students and had to break out of your comfort zone. Now if you are Chinese, there'd be people from a similar background virtually anywhere you study in the West, and it is almost unnatural to make a deliberate effort to break out of that.

      • 3willows 2 days ago ago

        The point being: when you find someone who is tailoring all his/her/its attention to you and you alone, why bother talking to anyone else?

        • Hupriene 2 days ago ago

          That's some real obsessive stalker logic there.

          • lazide a day ago ago

            other side. co-dependent.

            • iszomer a day ago ago

              Broadly speaking, this would be how I view the widening diplomatic gap between the CCP and Taiwan. Not the DPP, not the KMT, Taiwan.

    • computerex 2 days ago ago

      To be honest I think you and others overplay it. ChatGPT and LLMs in general sound pretty corporatey. A lot of the text written online, at least in English, is pretty homogeneous in style already.

    • cal_dent a day ago ago

      I think there is some truth to this. But there is also another plausible scenario, where styles now change far quicker than we probably expect. We as a society get bored after a certain amount of time, and that time has potentially shortened now that the pace of new output has increased so much. What would typically have taken, let's say, a decade and a half for people to get bored of (think about how all coffee shops started to copy Friends, and then the Instagram minimal cafe aesthetic became a thing) is probably shorter now, because LLMs mean it'll be oversaturated very quickly.

      The current style and cadence of LLM output is already getting tiring for many, so I'd expect a different style to take hold soon enough. And given that LLMs can mimic any style, that is easy enough to do at scale and quickly. The cycle then commences again until someone comes up with a novel style of writing that people like and that LLMs don't know yet, and the cycle starts again...

      Edit:

      I also vaguely remember an article about the cultural impact of one of the early image-generation AIs, maybe DALL-E if memory serves me well. I remember very little of the article now except a comment an artist made, which was along the lines that in a few years the image generation would be so good and realistic that a counterculture would inevitably emerge around nostalgia for the weird hallucinatory creations it used to make at the start, simply because at least those were more interesting. In a similar way you get the nostalgia for things like vinyl and handcrafted toys, etc. I think about that aspect of it a lot.

    • echo7394 a day ago ago

      The movie Idiocracy comes to mind almost every day for me as of late.

    • sho_hn 2 days ago ago

      What's weird is that so many people shrug this off with "eh, it's what they said about the calculator".

      Which to me is roughly as bad a take as "LLMs are just fancy auto-complete" was.

      I feel it's worth reminding ourselves that evolution on the planet has rarely opted for human-level intelligence, and that we possess it might just be a quirk we shouldn't take for granted; it may well be that we could accidentally habituate and eventually breed ourselves dumber and subsist fine (perhaps in different numbers), never realizing what we willingly gave up.

      • Nevermark 2 days ago ago

        Our thumbs, ..., our intellect, and especially language, gave us an ecological/economic niche.

        We became a technological species.

        We observed, standardized and mechanized our environments to work for us. That is our niche.

        But then things snowballed in the last couple of centuries. A threshold was crossed. Our technology became our environment, and we began adapting the environment for our technology's direct benefit, and for our own indirect benefit.

        Simple roads for us at first, then paved for mechanized contraptions. Wires for talking at first, then optimized for computers. We are now almost completely building out a technological world for the convenience and efficiency of the technology.

        And once our technology frees us from dependence on others, a second threshold will be crossed. Then neither others nor the technology will need us.

        I don't see a species of devolving humans, no longer needed by their creations, in a world now convenient for those creations, finding a happy niche.

        If there is a happy landing, it will need to take a different route than that.

        • johnisgood a day ago ago

          As Rust (from True Detective) has said: let us walk hand in hand into extinction. Or something along those lines. :)

      • saghm a day ago ago

        It seems like a stretch to argue that we have any clue what the evolutionary consequences would be for something that's been around only a couple of years. Human-level intelligence took millions of years to evolve, even when the lifespans of our ancestors were shorter than they are now, so trying to predict how something so new will affect the biology of future generations seems pretty much impossible. Even trying to predict how technology will affect society in a single generation is hard enough, and that's hardly long enough for any evolutionary changes to our intelligence as a species to become noticeable.

      • audinobs a day ago ago

        I don't know or really care what other people are doing with LLMs.

        I have learned so much the past 2.5 years it is almost hard to believe.

        To say I am getting dumber is just completely preposterous.

        Maybe this would be leading me astray if I had the intelligence of Paul Dirac and I wasn't fully applying my intelligence. The problem is I don't have anything like the intelligence of Paul Dirac.

      • nunez a day ago ago

        People who make that retort forget that the calculator was immensely helpful but _also_ did away with the need for mental math, which in my opinion is a bad thing. (Everyone should be able to calculate 5% and 10% of numbers, given how easy it is to do.)

        • johnisgood a day ago ago

          Well, I suppose many people do not know of these "mental math tricks".

          To get 10% of a number, just move the decimal left: 10% of 40 -> 4.0.

          To get 5% of this number, get 10% first, then halve it: half of 10%, in this case, is 2.0.

          If you want to do this on a computer / calculator, you simply do: 40 * 0.10 for 10% or 0.05 for 5%. I was a very young kid when I learned to do this on a calculator, and I absolutely loved it!
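
          If you want to script it instead, here is a minimal sketch in Python; the function names are just made up for illustration:

              def ten_percent(amount: float) -> float:
                  # 10%: shift the decimal point one place to the left.
                  return amount / 10

              def five_percent(amount: float) -> float:
                  # 5%: take 10%, then halve it.
                  return ten_percent(amount) / 2

              bill = 40.0
              print(ten_percent(bill))   # 4.0
              print(five_percent(bill))  # 2.0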

          • nunez a day ago ago

            Exactly, but when you learn math by calculator, you don't learn these tricks, which brings us to today, where tipping calculators rank pretty well on app stores.

            • colejohnson66 a day ago ago

              What makes tipping calculators even crazier is that your phone already has a calculator.

    • crimsoneer 2 days ago ago

      This is how the church felt about the printing press.

      • nerevarthelame 2 days ago ago

        While the church feared people interpreting information on their own, with LLMs it's the opposite: we fear that most interpretation of information will be done through a singular bland AI extruder. Tech companies running LLMs become the pre-press churches, with individuals depending on them to analyze and interpret information on their behalf.

      • majormajor 2 days ago ago

        The church would've LOVED everyone asking the same one-to-four sources everything. ChatGPT is literally a controllable oracle. Quite the opposite of the printing press.

        "Running your own models on your own hardware" is an irrelevant rounding error here compared to the big-company models.

      • toofy 2 days ago ago

        this would be the opposite. the llm situation may be heading back towards something similar to the church age.

        the church did all of the reading and understanding for us. the owners of the church gobbled up as much information as they could (encouraging confessions) and then decided when, how, where, and which of that information flowed to us.

        • nunez a day ago ago

          This exactly is the terminal state of big AI: models that only four companies can train (on datasets only they can obtain), companies that also happen to own all of the ancillary services for accessing those models (because most businesses are really chatgpt in a trenchcoat). Yet the world is begging and kicking down the doors for this, just like it did for social media.

      • XorNot 2 days ago ago

        Who is "the Church" in this analogy?

        • rolph 2 days ago ago

          refers to the gutenberg press, and mass production of printed works, threatening the siloed, ivory towers of knowledge at the time.

          https://en.wikipedia.org/wiki/Printing_press#Gutenberg.27s_p...

          if everyone has a bible, then who needs the church to tell you what it says.

          • ceejayoz 2 days ago ago

            Relying on an AI oracle to think for you is just as bad as relying on a priestly one.

          • LtWorf 2 days ago ago

            > if everyone has a bible, then who needs the church to tell you what it says.

            Clearly, all the protestants who burned more witches than the catholics ever did, and kept at it for centuries after the inquisition had stopped. But that's just my opinion here.

        • brookst 2 days ago ago

          People who consider themselves exceptionally smart, who are well educated and write well, who only ever need to communicate in their native tongue, and who have the luxury of investing time in developing a personal writing style.

          It is a good analogy. There is great concern that the unwashed masses won’t know how to handle this tool and will produce information today’s curators would not approve of.

          • andy99 2 days ago ago

            It's an extremely poor analogy; the original point is that it's an information virus telling people what to think (or thinking for them). It's the exact opposite of the freedom to think for themselves that people gained with the enlightenment; it's back to the days of the "church" (someone else) telling people how to think and literally writing their words for them.

      • mensetmanusman 2 days ago ago

        This analogy is going places.

    • makk 2 days ago ago

      It hasn't homogenized everything. It's further exposed humans for who they are. Humans are the virus.

      • mensetmanusman 2 days ago ago

        (1999)

      • jvm___ 2 days ago ago

        Agent Smith had it right when he was interviewing Morpheus in the Matrix.

        • goatlover 2 days ago ago

          And then ironically Smith became a virus threatening both humans and machines. However, Agent Smith was the Oracle's tool to force a treaty between the machines and humans. As the Architect said at the end of Revolutions, she played a dangerous game.

          But it was the only way forward to a new equilibrium.

  • ayaros 2 days ago ago

    We're going to have to go in the opposite direction and rely on directories or lists of verified human-made/accurate content. It will be like the old days of yahoo and web-indexes all over again.

    • DaveZale 2 days ago ago

      A few years ago, some talk briefly circulated about local internet efforts, possibly run by public libraries.

      Local news coverage has really suffered these past several years. Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

      That approach might be a good start. Use a cloud service that forbids AI bot scraping to protect copyright?

      • ceejayoz 2 days ago ago

        > Wouldn't it be great to see relevant local news emerge again, written by humans for humans?

        That sounds a lot like Nextdoor. With all the horrors that come with it.

        • ajmurmann 2 days ago ago

          Does Nextdoor actually have the local news? The other day I kept hearing sirens outside for hours. I hoped to find on Nextdoor what was going on but I mostly got lost-cat posts and people trying to sell stuff like handyman services. This is how it goes most times I check on Nextdoor. Maybe it depends on the area.

          • nunez a day ago ago

            It's not bad for hyper-local news in the absence of town newspapers (mostly gone) though this greatly depends on the communities that you subscribe to

        • DaveZale a day ago ago

          Nextdoor is terrible, agreed.

          No, we want real news! With some editorial oversight.

          Local reddits were good for a while, but the bots and human moderators make their own rules. It's not consistent.

          A nice place to start could be with old-school weather reports, where we can learn something again. It's all so superficial these days.

          Local events, local political issues with objectivity, history and future outlook, the list goes on and on. Maybe muzzle the negative talk with strict categories and guard rails to avoid another Nextdoor?

      • ayaros 2 days ago ago

        This doesn't seem to be structured differently than a standard-fare social media app. All the same issues with human verification on those apps would apply to this too.

        Unless you mean a platform only for vetted local journalists...

        • righthand 2 days ago ago

          Tie the account to the Library Card and then you can open it up to anyone.

    • Footprint0521 2 days ago ago

      I feel like SEO trash has made this a must have for me for the past few years already. If it’s not stack overflow, Reddit, or stack exchange, I’m wasting my time

      • ayaros 2 days ago ago

        Or MDN, which is yet another site that seems to be constantly ripped off by parasitic AI-generated SEO sites...

    • MPSimmons 2 days ago ago

      I had the thought the other day that one of the most valuable things a human-driven website could offer would be a webring linking to other human-driven websites

      • JKCalhoun 2 days ago ago

        I'm a fan of bringing back Web Rings.

        Perhaps a site could kick off where people proposed sites for Web Rings and edited them. The sites in question could somehow adopt them — perhaps by directly pulling from the Web Ring site.

        And while we're at it, there's no reason for the Web "Ring" not to occasionally branch, bifurcate, and even rejoin threads from time to time. It need not be a simple linked list whose tail points back to its head.

        Happy to mock something up if someone smarter than me can fill in the details.

        Pick a topic: Risograph printers? 6502 Assembly? What are some sites that would be in the loop? Would a 6502 Assembly ring have "orthogonal branches" to the KIM-1 computer (ring)? How about a "roulette" button that jumps you to somewhere at random in the ring? (So not linear.) Is it a tree or a ring? If a tree, can you traverse in reverse?
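
        Here's a rough sketch of how that structure might be modeled; the class, URLs, and ring names are made up purely for illustration:

            import random
            from dataclasses import dataclass, field

            @dataclass
            class RingNode:
                url: str
                next: "RingNode | None" = None   # forward link around the ring
                prev: "RingNode | None" = None   # lets you traverse in reverse
                branches: list["RingNode"] = field(default_factory=list)  # "orthogonal" rings

            def make_ring(urls: list[str]) -> list[RingNode]:
                nodes = [RingNode(u) for u in urls]
                for i, node in enumerate(nodes):
                    node.next = nodes[(i + 1) % len(nodes)]  # tail points back to head
                    node.prev = nodes[i - 1]
                return nodes

            asm_ring = make_ring(["6502.example/a", "6502.example/b", "6502.example/c"])
            kim1_ring = make_ring(["kim1.example/x", "kim1.example/y"])
            asm_ring[1].branches.append(kim1_ring[0])  # branch from one ring into another

            roulette = random.choice(asm_ring)  # the "roulette" button: jump anywhere
            print(roulette.url, "->", roulette.next.url)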

        • ayaros 2 days ago ago

          Web rings are a thing I've been thinking about a bit. Anyone know any good ones? There are a couple I reached out to for one of the projects I'm working on, to get my site on them, but I never got responses. There are also some webrings I've come across that have died or been retired. :(

          • JKCalhoun 2 days ago ago

            Yeah, the dynamic (remote) web ring server I am conceiving would handle the dead links in the ring.

            There's still the buy-in problem though. Convincing the owners of the sites you want in the ring to modify their HTML to dynamically fetch and display the ring links.

      • ayaros 2 days ago ago

        That's something I really enjoy about web 1.0: links pages. We need to bring back the days when every site had a giant list of links to other sites. I don't care if half of them end up as dead links. This is part of what made the web fun. You'd come across a site, see what it had to offer, and then you'd check the links page and find five, ten, or 20 other sites offering similar things. No need for algorithms tracking your every move to recommend things to you... the content itself would do that.

    • ayaros 2 days ago ago

      (To clarify, I'm not suggesting this is necessarily a bad thing)

  • johnnienaked 2 days ago ago

    It brings to mind the Simpsons episode where the Itchy and Scratchy writers go on strike. What followed was a beautiful scene of children rubbing their eyes in unfamiliar sunlight as they were forced to go outside, making up games and playing on playgrounds, all while Beethoven's Pastorale hummed in the background.

    I'm all for it. Let big tech destroy their cash cow, then maybe we can rebuild it in OUR interest.

  • jcalx 2 days ago ago

    Alternatively stated:

    > The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    > Viruses do not arise from kin, symbionts, or other allies.

    > The signal is an attack.

    ―Blindsight, by Peter Watts

    • TZubiri 2 days ago ago

      Is that the sci fi novel about an alien race that is annoyed and feels attacked by noise?

      • lelandbatey 2 days ago ago

        Nope, it's the book about the concept of a truly Turing-machine-like alien being which has no "consciousness"; it's mechanistic but insanely complex, like the Borg but not even assimilating; it's like a very, very sophisticated "grey goo" nanobot scenario.

  • d4rkn0d3z a day ago ago

    Intelligence itself may be a virus, literally. The chemical activity in your brain is not that unlike viral activity. The reason humans can do math may be that some kind of salamander-like creature millions of years ago contracted a brain virus, that virus happened to produce behavioral effects that helped it spread and ultimately become symbiotic. We may be little more than a pitched battle between what we would call bacteria, viral load, and the cells that we would like to think of as our own but which are really combinations of earlier forms of life that joined together for practical reasons.

    On this view, we are not exquisitely designed machines but rather accidental pitched battles occurring in nature. The question is does this ontological view produce new predictive capacity? Can you see yourself as a being entirely driven by microscopic life, rationalizing everything after the fact so that you are the master of your destiny? Is intelligence something that you partake in rather than possess? What is technology and what does it want from us?

    • interstice a day ago ago

      This triggered two main thoughts for me. First, this virus-centric take is not so different to the one taken in Sapiens by Yuval Noah Harari, in which he claims that we didn't so much domesticate crops as become domesticated _by_ crops. While I think it's healthy to consider less human-centric views of what we are, there as here I think questioning the intent of entities that don't have end goals may not lead to very profound answers. The second thought is that I once read that the activity of our brain is dancing on the edge of chaos, and that tipping in either direction causes it to cease functioning. Similarly, our immune systems are dialled close to the limit, to the point where they can take themselves and their owner out by accident. All this is to say that our existence is already at or near the crossover between so many gradients, which most likely are in essence the pitched battles you're referring to.

      • d4rkn0d3z 6 hours ago ago

        It's a view of life as unstable equilibria.

  • dang 2 days ago ago

    Discussed (a bit) at the time:

    GPT Might Be an Information Virus - https://news.ycombinator.com/item?id=36675335 - July 2023 (31 comments)

    GPT might be an information virus - https://news.ycombinator.com/item?id=35218078 - March 2023 (1 comment)

  • cainxinth a day ago ago

    Books were also an information virus. They spread everywhere, took root, reduced the need for memorization and oral traditions, and changed the way we think forever.

  • frahs 2 days ago ago

    I'm unconvinced -- certainly this will happen gradually, and there will be widespread public support for a solution. It's not too hard to imagine making online reviews or "high quality content" require verification tied to some per-citizen identification code (or asymmetric key). Maybe this makes it harder to post anonymously on the internet, but at the very least we won't have the issue of proving identity.

    Just wish we had a competent government to handle the upcoming transition. But even an incompetent one can have smart employees under it, and can give them the funding they need to accomplish this.
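
    As a minimal sketch of the asymmetric-key idea (assuming the third-party Python cryptography package; the scenario and messages are just illustrative):

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # The author signs their post with a private key tied to their verified identity.
        author_key = Ed25519PrivateKey.generate()
        post = b"This review was written by a verified human."
        signature = author_key.sign(post)

        # Anyone can check the signature against the author's published public key.
        public_key = author_key.public_key()
        try:
            public_key.verify(signature, post)
            print("signature valid: post is attributable to the key holder")
        except InvalidSignature:
            print("signature invalid: content was altered or not from this key")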

  • undefined 2 days ago ago
    [deleted]
  • SrslyJosh 2 days ago ago

    > * For what it’s worth – I am personally happy that a company that is committed to “doing it right” is spear-heading this change.

    Seems to be working out great so far. (=

  • androng 2 days ago ago

    I heard from an artist that Pinterest is full of AI-generated stuff now, so artists have to go back to physical books for art references.

  • undefined 2 days ago ago
    [deleted]
  • Havoc 2 days ago ago

    It’ll certainly get more noisy but I don’t quite buy the total communication collapse implied here.

    Humans still have an inherent need to be heard and hear others. Even in a pretty extreme scenario I think bubbles of organic discussion will continue

  • gfody 2 days ago ago

    I think we should prefer that content-farm content be replaced with generated content. Ultimately it'll compress back down to the prompt that generated it, and that'll be easier to filter out.

  • iluvlawyering 2 days ago ago

    And what if intelligence as measured by computation complexity reaches a natural limit inevitably marked by a detached compassionate disposition? Does the cure then become the virus?

  • t1234s 2 days ago ago

    Is there any way these LLM tools could watermark their output in a way that keeps them from re-training on output generated by the same LLM?

    • konfusinomicon 2 days ago ago

      strategic em-dash placement is my guess but only the machine knows the code

    • StarlaAtNight 2 days ago ago

      Nice try, AI!

  • anthk 2 days ago ago

    There are public sources of information such as a curated Wikipedia, open content from Kiwix, Gutenberg math books, and OpenStreetMap for maps. Better, you can download offline, curated versions of these, so anyone can have a working snapshot anytime. That's good for avoiding future AI tampering. As long as these stay AI-free, we are potentially heading in the right direction.

    • boredatoms 2 days ago ago

      We can only trust a snapshot from pre-AI years; eventually everything will be contaminated.

      • brookst 2 days ago ago

        s/AI/internet

  • mwkaufma 2 days ago ago

    "Outside of the fate of the web, I see GPT as a monumental force of good." [Citation Needed]

    • undefined 2 days ago ago
      [deleted]
    • thomashop 2 days ago ago

      The rest was fine without citations, but the part you disagree with needs them?

  • SamPatt 2 days ago ago

    >There’s a related problem that will occur: as AI-generated information drowns out human-generated information, humans will simply stop producing content.

    Stopped reading here.

    No, humans won't stop generating content. There's no reason to believe this is inevitable.

  • henriquegodoy 2 days ago ago

    IMO AI as a whole is just the catalyst

  • tomlockwood 2 days ago ago

    I want to suggest that the virus is even more insidious, and is an organism that feeds on VC money, and it is evolving via a substrate of human programmers to become more efficient at consuming it. And like an organism evolving towards survival, it gives no shits about the utility generated in return for the thing it eats.

    And, as time goes on, it'll get more efficient at the consumption and waste less and less energy on the generation of utility. It is an organism that needs servers to feed and generates hype like a deep-sea monster glows its lure.

  • aspenmayer 2 days ago ago

    Curious Yellow, anyone?

    https://en.wikipedia.org/wiki/Glasshouse_(novel)

    > "Curious Yellow is a design study for a really scary worm: one that uses algorithms developed for peer-to-peer file sharing networks to intelligently distribute countermeasures and resist attempts to decontaminate the infected network".

    Hat tip to HN user cstross (as I discovered the idea via Charlie’s blog):

    http://www.antipope.org/charlie/blog-archive/October_2002.ht...

    These topics were first brought to my attention through his amazing novel Glasshouse. I’ve had the pleasure of having my first edition copy of the book signed by the author, and I then promptly loaned it indefinitely to a friend, who then misplaced it. The man himself is a friendly curmudgeon who I am happy to have met, and I have enjoyed reading about the future through his insights into the past and present.

    Also I must acknowledge Brandon Wiley, who wrote the inspiration for Curious Yellow as far as I can tell.

    https://blanu.net/curious_yellow.html

  • 4b11b4 2 days ago ago

    virus is a good way to think about the effect radius

  • hayden_dev 2 days ago ago

    "might"

    • anthk 2 days ago ago

      You are right. The web is already rotten. There are tons of AI-generated articles on supposedly serious news sites. It will only get worse over time.

    • DrammBA 2 days ago ago

      "2023"

  • jaimsam 2 days ago ago

    [dead]

  • hnpolicestate 2 days ago ago

    The same thing could have been said about the computer in general and masses adopting the personal computer.

    That's how I view LLM's now. They are what follows computers in the evolution of information technology.

  • resters 2 days ago ago

    It’s a method of storing information that makes it far more useful than previous methods.

    Sure we can idealize feats of the human brain such as memorizing digits of pi. LLMs put more human behavior into the same category as memorizing digits of pi, and make the previously scarce “idea clay” available to the masses.

    It’s not the same as a human brain or human knowledge but it is still a very useful tool just like the tools that let us do maths without memorizing hundreds of digits of pi.