“ChatBot” is bad design


As anyone who has ever seen anything I have created knows: I am not a designer. So take my reasoning and thinking here with a grain of salt. I do have many designers amongst my friends and I do think that a lot of software engineering falls within the domain of design to a certain degree. Let me explain.

To me, design is the process of exploring – and potentially formalizing – a problem space and possible solutions to that problem, culminating in a solution based on that exploration. Design (to me) is about creating artefacts that do certain specific things for specific target audiences/users. The specific thing can be evoking an emotion or an abstract thought process or just doing a certain task really well. Beauty is a quality of good design because things being pleasant can help with their use and acceptance, but anyone who has ever had to sit on a chair designed mostly to look good knows that beauty only gets you so far.

So, using that understanding of design, I keep thinking about the current trend of everything being turned into chatbots or “conversational systems”. And I can’t shake the feeling that these paradigms are – in most cases – just bad design.

Broken promises


A chat/conversational interface implies a whole lot of things: we chat with other people, so a chat suggests a level of social experience, a shared space for people to meet. Which is great if you want to tell your users “this part of the app is where you talk to people”. But what happens when that “promise” the design made is broken?

We’ve all been forced to interact with chatbots for support. There’s an issue with your phone contract, the underpants they sent you don’t fit or some other thing, and you need to talk to someone to sort things out. Enter the chatbot that keeps answering your questions with useless links or other strategies to keep you away from the “expensive” humans. The company wants to save on support, so it tries to distract you with a bot which either accidentally helps you find a solution or makes you think “it’s not worth it, I’ll just have to live with my crotch being strangled”. It’s a distraction to avoid investing in you and the relationship. How does that make you feel?

This is not just a support question: OnlyFans recently had issues where people realized that they were not actually chatting with the adult performers they paid for but with someone – or something – else. Now the other side of the chat might have been underpaid staff, but the dynamic is the same: chat interfaces make a promise of social experience and trust that an LLM chatbot can never fulfill. It’s a deception. And good design should not deceive.

Guideless


A good interface guides you to solving the problem you have; good design makes it easy for you to do what the thing is supposed to do. Think of a program you really like to use: it probably shows you the steps you need to take in a structured way, asks you the few things it needs from you, and then does the thing you want it to do. Because that is what good design does.

What does a chatbot guide you to do? Maybe it asks you a question, but for many chatbots it’s just “how can I help you?” or “Ask me anything”. Does that help you use it? Does that structure your path towards solving your problem? The huge number of people whose whole identity has become telling others how to write prompts begs to differ.

I have already argued before that “AI” systems are not tools, because they don’t contain clear and specific descriptions of problems and corresponding strategies for solving those problems. But let’s pretend that we have a system that can solve a certain problem really well and efficiently: is a chatbot a better interface than a structured form or UI that lets you just go through the required steps and then get the result? Chatbots don’t narrow down the path towards a solution, they leave everything open. Which might be great for engagement and keeping people hooked, but is that an efficient use of your time?
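To make that contrast concrete, here is a minimal sketch in Python (all names – RefundRequest, submit_refund, chatbot – are hypothetical, purely for illustration): the structured interface declares exactly what it needs and encodes its own validation, while the chat interface promises everything and guarantees nothing.

```python
from dataclasses import dataclass

# A structured interface: the required inputs and the path to the
# solution are encoded in the design itself. (Hypothetical example.)
@dataclass
class RefundRequest:
    order_id: str  # the form tells you up front what it needs
    reason: str

def submit_refund(request: RefundRequest) -> str:
    # Validation is part of the tool, not the user's job.
    if not request.order_id:
        raise ValueError("order_id is required")
    return f"Refund for order {request.order_id} submitted."

# The chatbot "interface", by contrast: one free-text field,
# no structure, no guarantees about what comes back.
def chatbot(prompt: str) -> str:
    ...
```

The first version narrows the path to a solution; the second leaves all the narrowing to you.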

Outsourcing of work


Let’s talk about your time a bit. I am very protective of mine and hate it when objects or processes mindlessly waste it. I do of course waste my time on weird shit, thankyouverymuch, but I want to make that call.

A design that forces me to waste time or do a lot of unnecessary busywork is bad because it didn’t do its job: it didn’t make the process easier and more structured for me, it leaves that labor to me. And I have a similar feeling towards this as I have towards self-checkout terminals at stores: why do I have to do unpaid work for you and still pay the same? Why should I make it easier for you to employ fewer people while I get worse service at the same price? That feels dumb. And wrong.

Chatbots don’t make my work easier. Instead of getting a predictable, understandable result based on my needs in a specific situation, I get extra work assigned: I need to phrase my query the right way to get the machine to lie maybe a bit less, and add magic words to the input to stop it from going off the rails. That is labor I have to put in to make a bad design work. It feels like I am not just doing my own job but also the work that the operator of the service or product I have to use through chat should have paid professionals to do. And I’m not getting paid for it.

Like, why should companies get away with refusing to do the work of designing their products in a meaningful way and still get paid?

I want solutions


I do not need the one magic machine that claims to solve all my issues and then makes me jump through conversational hoops to get a mediocre result. That is actually the opposite of what I need.

I want people who know their shit to externalize all they know into tools I can use, so I can benefit from all that embodied knowledge. And chatbots do not help me with that at all (regardless of the capabilities or lack thereof of LLMs).

I want simple tools that do specific things, built by people who are paid fairly and go home on time.


Are “AI” systems really tools?


I was on a panel on “AI” yesterday (it was in German so I don’t link it in this post; the specifics don’t matter too much) and a phrase came up that stuck with me on my way home (riding a bike is just the best thing for thinking). That phrase was

AI systems are just tools and we need to learn how to use them productively.


And – spoiler alert – I do not think that is true for most of the “AI” systems we see sold these days.

When you ask people to define what a “tool” is they might say something like “a tool is an object that enables or enhances your ability to solve a specific problem”. We think of tools as something augmenting our ability to do stuff. Now that isn’t false, but I think it hides or ignores some of the aspects that make a tool an actual tool. Let me give you an example.

I grew up in a rural area in the north of Germany. Which means there really wasn’t a lot to do, TBH. This led to me being able to open a beer bottle with a huge number of objects: another bottle, a folding ruler, cutlery, a hammer, a piece of wood, etc. But is the piece of wood a tool, or is it more of a makeshift kind of thing that I use tool-like?

Because an actual tool is designed for a certain way of solving a set of problems. Tools materialize not just intent but also knowledge and opinion on how to solve a specific problem, ideas about the people using the tool and their abilities, as well as a model of the problem itself and the objects related to it. In that regard you can read a tool like a text.

A screwdriver, for example, assumes many things: about the structural integrity of the things you want to connect to each other, and about whether you are allowed to create an alteration to the object that will never go away (the hole that the screw creates). It also assumes that you have hands to grab the screwdriver and the strength to create the necessary torque.

I think there is a difference between fully formed tools (like a screwdriver or a program or whatever) and objects that get tool-like usage in a specific case. Sometimes these objects are still proto-tools, tools on their way to solidifying, experiments that try to settle on a model of the problem and a solution to it. Think a screwdriver whose handle is too narrow so you can’t grab it properly. Other objects are “makeshifts”: objects that could sometimes be used for something, but where that usage is not intended, not obvious. That’s me using a folding ruler to open a beer bottle (or another drink with a similar cap, but I learned it with beer).

Tools are not just “things you can use in a way”; they are objects that have been designed with great intent for a set of specific problems, objects that through their design make their intended usage obvious and clear (specialized tools might require you to have a set of domain knowledge to have that clarity). In a way, tools are a means of transferring knowledge: knowledge about the problem and its solutions is embedded in the tool through its design. Sure, I could tell you that you can easily tighten a screw by applying the right torque to it, but that leaves you figuring out how to get that done. The tool contains that knowledge. Tools also often explicitly exclude other solutions. They are opinionated (more or less, of course).

In the Python community there is a saying: “There should be one – and preferably only one – obvious way to do it.” This is what I mean. The better the tool, the more clearly it guides you towards a best-practice solution.
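(That saying is, fittingly, itself shipped inside a tool: it comes from the Zen of Python, PEP 20, which every Python installation can print for itself.)

```python
# Prints the Zen of Python (PEP 20), including the line quoted above:
# "There should be one-- and preferably only one --obvious way to do it."
import this
```

Which leads me to thinking about “AI”.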

When I say “AI” here I am not talking about specialized machine learning models that are intended for a very specific case. Think a visual model that only detects faces in a video feed. I am thinking about “AI” as it is pushed into the market by OpenAI, Anthropic etc.: “AI” is this one solution to everything (eventually).

And here the tool idea falls apart: ChatGPT isn’t designed for anything. Or as Stephen Farrugia argues in this video: “AI” is presented as a Swiss army knife – something tech loves to compare its products to, something that might be useful in some situations.

This is not a tool. This is not a well-designed artifact that tries to communicate clear solutions to your actual problems and how to implement them. It’s a playground, a junk shop where you might eventually find something interesting. It’s way less a way to solve problems than a way to keep busy, feeling like you are working on a problem while doing something else.

Again, there are neural networks and models that clearly fit into my definition of a tool. But here we are at the distinction between machine learning and “AI” again: machine learning is written in Python, “AI” is written in LinkedIn posts and PowerPoint presentations.

Tool making is a social activity. Tools often do not emerge fully formed but go through iterations within a community, taking their final shape through use by a community of practitioners and their feedback. All tools we use today are deeply social, historical objects that have embedded the knowledge and experiences of hundreds or thousands of people in order to create “progress”, to formalize certain solutions so we can spend our brain capacity on figuring out the next thing, or to just create something beautiful or fun. Our predecessors have suffered through proto-tools and all the hurt that comes from using them so we wouldn’t have to. And this social, temporal context is all part of a tool.

And the big “AI” systems that supposedly are “just tools” have none of that. They are a new thing, and for most problems their makers just hope that you will find ways of using them. They in a way take away hundreds of years of social learning and experience and leave you alone in front of an empty prompt field.

So no, I do not think that the “AI” systems that big tech wants us to use (and rent from them) are tools. They are makeshifts at best.

