The inherent selfishness of AI

and why it won't actually solve all our problems

The other week, it was posed to our small design technology team that we should use the Confluence MCP (our documentation at Rocket lives in Confluence) to format and/or write our documents. By way of example, my teammate's document was re-written by an LLM Agent using said MCP. Her concise, audience-specific writing was replaced (very quickly & efficiently, I'm sure) by nicely formatted content that, in the end, obfuscated the entire reason for the document's existence. But that's the rub, right? AI-generated output is not about quality or collaboration or alignment. It is about increasing your own ability to produce an artifact at speed. It centers you at the expense of others so that you can presumably go about doing "higher impact" work. It is an inherently self-centered technology. (Which, from that angle, I guess explains why the managerial class and start-up apologists are so enamored with it...)

Anyways, let's take the aforementioned document as an example:

It was produced in response to a small conflict that arose on our newly formed design technology team: that of Pull Request etiquette and best practices. As a new team, the issue came to a head as our not-so-fleshed-out processes came into conflict with our growing project load and our misaligned assumptions/expectations around what others on the team find necessary. In essence, this was a people and process problem. My teammate and I had a couple of conversations, she wrote a short doc outlining what she was looking for in PR process & etiquette, and we aligned around it. Problem solved! It was a human-derived solution for a people and process problem.

Of course, because we live in the era that we live in, what was a people and process solution soon had a techno-optimist layer applied to it: enter AI. As soon as the doc was shared out, the powers that be seemingly decided that because we have a Confluence MCP, we don't need to be spending time writing our own docs. We can just have AI write and/or format them. Why spend time writing a whole document when you can prompt an Agent with a topical outline and have it do everything? So it did. And, as I mentioned above, while it did a great job of producing well-formatted text around PRs, it also obfuscated the point. Where the old doc was concisely written and targeted at our design technology team, the new doc includes a lot of generic fluff around what pull requests are and how they work in GitHub - not really necessary for folks with our technical experience or expertise. Because of that, one now needs to scroll way down to get to the part that actually solved our previously described people and process problem.

This isn't to say that my teammate's doc was perfect before AI had its say - it wasn't; she was busy and wrote it quickly, so there could definitely have been some more formatting and elaboration done - or that the AI-generated content was completely useless - it wasn't; there's a new section on when to rely on Copilot for code review versus when to ask for human review that is genuinely useful for a small, busy, and mostly internal-facing team like ours. It is more to say that AI-generated content is largely an anti-pattern when problems (as they often are) are people and process related. AI prioritizes individual output and velocity. This AI-generated document was created by one person. Very quickly. And the output is plausible enough, sensible even, at first read-through. But creating in this way - outsourcing the deeper thought and the small decisions of creation - removes you from the unknown unknowns that challenge you and ask questions of you. It removes you from seeking knowledge from others when gaps in your own appear, or from asking yourself about the necessity of, or audience for, each small design decision. You appear able to operate without that help, without those other people, and still produce competently. And, in this case, you get a doc that has the necessary information in it but whose audience is misaligned and whose purpose is obfuscated. Is that really better?

And thus, the AI-generation process is self-centered in those ways: it prioritizes one's own output and one's own creation experience (think DX) over the experience of those consuming that output. Previous ("old") processes enforce checks on this self-centeredness by making you think and ask questions. Or by having others question you. When you need to work through the whole solution without AI - come up with the content or artifact wholly on your own - you are confronted with all the little details that you do not know, whether by way of expertise gaps or a lack of understanding of what others need. Not knowing, and needing others, can be uncomfortable, though. It is friction. And it can be quite time-consuming. You need to ask questions and wait for responses. If you're not secure in yourself, you may wonder if you're asking dumb questions. If you're not sure of the process (or the process is ill-defined), you may feel uncomfortable and wonder if you're asking the wrong people. But that is also how you grow and how you build better solutions and processes. We lose this when we work alone with AI. To me, that's not worth the increases in speed and throughput.


