Why is so much generated AI art so... literal? The cover art of this PDF literally spells out what the graphics are supposed to represent. The vast majority of AI visuals on LinkedIn are the same way. If this is what's in store as the future of art, at least commercial art, it feels like a huge step backwards if I'm being honest.
And anyway, what's the point of generating a massive tome like this on a topic evolving as fast as agentic software? Surely it will be outdated within months, if not weeks...
I’m guessing it’s because it’s not produced by artists, or by people who have an eye for art generally.
When it comes to the book, chances are 80% of it is written by AI. I mean, lots of content is produced by pure AI. I follow some AI subreddits, and the majority of the posts are very obviously generated with a couple of prompts; they don’t even bother styling them when copy-pasting. I’m really struggling to read online content recently.
I just noticed that the PDF cover simply says "© <Author>", not the traditional style of author attribution, which is usually just plain "<Author>". I don't know why, but I found it interesting...
Could it be because a lot of these models are trained on captions? At least if I'm remembering correctly, that's what they use to create the association between what's seen and what's said.
Partly it's a byproduct of the way prompting works. Partly it's that the majority of people generating content with AI are not skilled at conceptualizing imagery in the way creative professionals are. I think it's more so the latter.
The sweet conference speaking fees, followed by the resultin' consultin'.
The answer is - Grift!
I skimmed this; it isn't terrible content (even though many parts are clearly AI written).
But I don't understand the purpose of this book. Is it an educational material, a speculative fiction, or an essay trying to convince the reader of something?
Because if you wanted any of these things, you could literally skip the book and go straight to the AI that will give it to you, tailored for your project.
This isn't a criticism, more a philosophical question I'm asking myself after 25 years of coding.
Why do people read magazines about <x thing they can do> when they can be doing that thing?
That's a great answer. I hadn't considered that this could be taken as casual literary entertainment.
It is certainly an onslaught of tokens.
Book ends with "There’s a fundamental truth about this transformation that no book can fully convey: working with AI teammates is something you must experience to understand"
The purpose is grift!
For writing with an LLM you really have to be precise and honest with your AI usage and I don't think the disclosure here does a good job at that. If you sample a few random pages there are huge shifts in style and tone.
The future where your AI expands your sentence into a few paragraphs that my AI then distills back down into a sentence sucks. Just send me your rough draft.
I'd rather people bounce back and forth with the AI until they're happy, but then not fluff it up and expand it needlessly in the final step. Spit out a condensed, tight bullet list: no fluff, no purple prose, no emojis, and send me that instead.
And the full circle begins: AI writes a lot of content, and we ask another AI to summarize it. It’s like that project where a guy kept uploading and downloading a video to YouTube until it was just a mess of pixelated frames. I feel AI-written content is similar.
This was done in the analogue age https://en.wikipedia.org/wiki/I_Am_Sitting_in_a_Room
The skepticism in this thread is fair, but I think it misses the more interesting question: what changes about software engineering verification when the author is an AI rather than a human?
When a human writes code, you can reason about intent. When an AI writes it, the cognitive overhead of understanding the output is higher, not lower. This makes formal guarantees at the output level more valuable -- not less. The interesting work in "agentic SE" isn't coordination patterns, it's: how do you specify what correct looks like in a way that's verifiable at generation time?
Most current AI coding tools solve the wrong problem: they help AI write human-readable code. But if the human is primarily reviewing, not writing, the bottleneck shifts to verification, not readability.
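One way to make "specify what correct looks like" concrete is to state correctness as executable output-level properties, so the reviewer checks the result rather than reading the generated code. A minimal sketch; `ai_sort` is a hypothetical stand-in for any AI-generated implementation, not from the book:

```python
from collections import Counter

def ai_sort(xs):
    """Stand-in for an AI-generated implementation (hypothetical)."""
    return sorted(xs)

def verify(impl, cases):
    """Check output-level properties instead of reading the code."""
    for xs in cases:
        out = impl(xs)
        # Property 1: output is in non-decreasing order.
        assert all(a <= b for a, b in zip(out, out[1:])), "not ordered"
        # Property 2: output is a permutation of the input (multiset equality).
        assert Counter(out) == Counter(xs), "elements changed"
    return True

print(verify(ai_sort, [[], [3, 1, 2], [5, 5, 1]]))  # → True
```

The point of the sketch: the properties are far shorter than the implementation and hold regardless of how the generated code works internally, which is where the leverage is if verification, not readability, is the bottleneck.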
It's not that I don't like AI-generated text; it's that I'm tired of the whole "it's not this, it's that" style of writing.
AI code is much easier to read than AI text (or books). It's kind of like how people feel about the AI-generated book cover: that's how I feel about generic AI writing.
Yes!
Fun fact, this isn't new. There's an entire discipline that has been doing this for 50 years: machine learning.
Literally the hard part people deal with in ML is how to specify the goal for a machine that's just going to blindly make it happen (the other half is how to optimize that function, but that tends to be considerably easier). Like, if I want a good image-recognition algorithm, what function should I use to compute how good my current approach is? Particularly when I don't have annotations for every output.
We're going to need to carry the methods we developed in ML over to software engineering and many other fields.
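The "specify the goal" half has a standard concrete form in ML: a loss function that scores how good the current approach is, which the optimizer then blindly minimizes. A generic sketch using cross-entropy for a classifier (an illustration of the point, not anything from the book):

```python
import math

def cross_entropy(predicted_probs, true_label):
    """Score one classifier prediction: lower is better.

    `predicted_probs` is the model's probability for each class;
    the loss is the negative log-probability assigned to the truth.
    """
    return -math.log(predicted_probs[true_label])

# A confident correct prediction scores near 0;
# a confident wrong one is heavily penalized.
good = cross_entropy([0.05, 0.90, 0.05], true_label=1)  # ≈ 0.105
bad = cross_entropy([0.90, 0.05, 0.05], true_label=1)   # ≈ 3.0
print(good < bad)  # → True
```

The function is the entire specification of "good image recognition" as far as the optimizer is concerned, which is exactly why choosing it well is the hard part.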
That first quote by "Prof. Daniel M. German" is made up (hallucinated)?
https://scholar.google.com/citations?user=hpxl9PEAAAAJ&hl=en
The researcher seems to be real, at least? Perhaps the quote has not previously been written down?
This is way too dense; you need to distill your thesis and interesting ideas down to a small post if you expect people to spend time reading a 417-page PDF.
You're crazy if you think the target demo of "business leaders" and "thought leaders" aren't going to dump it into their favorite LLM first thing and prompt their way into a summary.
So much water and resources being wasted by "thought leaders" posting performative BS on LinkedIn (just count "It is not X, it is Y" style posts).
Directionally correct, but it's important to note that the water wasted sustaining the insufferable human is much higher than that of producing the tokens.
I'm not the author, I just got sent the link by someone else :)
417 pages of AI-infested text.
The author appears to be a CS professor. If it is the same person, it is interesting that he chose not to reveal his affiliation, or mention this document on his page.
It is him, he mentioned the book on his LinkedIn
https://www.linkedin.com/posts/ahmed-e-hassan_%F0%9D%90%80%F...
Why does this have so many votes? It seems to have so much ai generated everything…
> Unable to display PDF directly.
So :shrug:
Edit: Downloaded the pdf, started reading it. So much slop. I think something of value could be surfaced much earlier.
Morpheus with a full head of hair!
If you're going to write with AI give me the prompts. Don't produce this strange tone-shifting full-of-fluff mess where most of it is machine filler content. The density of ideas to text is absurdly low.
Let's just be open and honest with one another. If I really want to read something like this, my AI can generate it from your prompts just as well as yours can.