AI Insulation – How AI Keeps You Apart From Loved Ones


Imagine that you’re at a social event and you walk up to shake someone’s hand. Instead of receiving an outstretched hand, you get a hand wrapped in multiple layers of gloves, and the person is inside what looks like a plastic hamster ball. That’s what it’s like to be represented by layers of generative AI. This is an analogy for what I call “AI Insulation”: layers of AI wrapping around our interactions and making them less raw.

When I talk about AI here, I mean generative AI specifically: not just the LLMs used for text, but also the models used to generate images and videos. I also think AI use falls into two spheres. One is the personal sphere, which covers everything you do on your own time: speaking with your friends and family, interacting with your kids. The other is the work sphere: whatever you do for work, with your colleagues, your vendors, your clients, and so on.

In the work sphere, anything that gets your work done more efficiently should be encouraged. Actually, let me correct myself: anything that gets your work done more efficiently while maintaining the same quality as human-only work should be encouraged. Emphasis on maintaining the same level of quality.

Many people ask, “Is it right to use technologies like ChatGPT? Is it right to use it for work, to write emails, or to prepare reports?” For me, the answer is obviously yes, keeping in mind any organizational data-use policies. Some organizations allow the use of generative AI freely, while others only allow specific models, such as internal models that don’t use your data for future training. And of course, some organizations might allow no LLM use at all.

Barring these policies, the main question you should ask yourself is: does it actually help you? For example, I write pretty decently and efficiently. If I were to use ChatGPT for my emails or chats, it would actually slow me down. It’s not because ChatGPT is slow; it’s because writing the email myself is quicker than explaining to ChatGPT how to write it. So in these cases, I don’t opt to use ChatGPT, because it’s more efficient to do it myself.

Efficiency is one consideration. Another is quality. I’ve worked with someone in the past who would dump report preparation work into ChatGPT and just use the output. That’s fine in principle, but only if you double-check the output. Is it good enough to pass as your work? Is it what’s expected? For this person’s work, the answer was no. This quality bar is very important, and you can only tell whether the quality is good if you have the expertise to prepare the response yourself. For instance, if you’re writing instructions for solving a technical issue, you could ask ChatGPT, but if you can’t determine whether the instructions are correct, you risk producing a bad report or write-up, which reflects poorly on you and undermines your work quality and ethic.

The right approach should be obvious: it’s literally like working with junior staff. You would give them a task, and they might come back with something after some research, but you would still check it. Over time, as the work becomes better understood and you trust the person, you might delegate more. Similarly, with ChatGPT, you have to develop a sort of rapport with it, understanding what it can and can’t do. Why do we check junior staff’s work but expect ChatGPT to magically know everything? It doesn’t work like that. You need to check its work and ensure that whatever it generates is accurate.

You might ask, “If I have to check everything ChatGPT does, what’s the point?” Well, there are tasks ChatGPT can do without much risk, like checking grammar, drafting basic notification emails, or ensuring content is relevant. These are simple, syntactic tasks that ChatGPT excels at. But when you’re asking for facts, or even when it’s rewriting content, you need to make sure it didn’t just invent some BS.

Another thing I love using ChatGPT for is formatting content. I once had a list of names from different teams, and I asked ChatGPT to format it into a table, which it did. Then I asked it to replace the team names with departments, and it did that too. It’s something I could do manually, but it’s convenient to just type the request. I’ve also used it for diagramming. I asked it to help design a database schema using MermaidJS syntax, and it generated an ERD based on my requirements. This back-and-forth was fun and sped up the design process.
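To give a sense of the kind of output this back-and-forth produces, here is a minimal sketch of a MermaidJS ERD. The entities and fields are hypothetical stand-ins, not the actual schema from my project:

```mermaid
erDiagram
    %% Hypothetical schema: a department contains teams, a team has members
    DEPARTMENT ||--o{ TEAM : contains
    TEAM ||--o{ MEMBER : includes
    DEPARTMENT {
        int id PK
        string name
    }
    TEAM {
        int id PK
        string name
        int department_id FK
    }
    MEMBER {
        int id PK
        string name
        int team_id FK
    }
```

The appeal is that this is plain text: you can paste ChatGPT’s output straight into a Mermaid renderer, ask for a tweak (“add a FK from MEMBER to TEAM”), and re-render, which is what makes the iteration loop so quick.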

These are some examples of using ChatGPT for tasks that are exploratory rather than factual in nature. As long as you’re not asking it for facts, you can expect better results and accelerated work. These are advantages you really don’t want to miss out on.

As a side discussion, resumes today are mostly vetted by applicant tracking systems (ATS), and no one reads them anymore (at the first pass, anyway). There are CV generators that use your job history and education data to create your CV, and then companies use AI to read it. So, what’s the point of a CV anymore? I want to explore in another post how it might be time for more machine-readable representations of a CV. But these are the considerations for generative AI use in the workplace.

What about the personal sphere that I talked about? This is where I draw the line. When it comes to work, efficiency is key. But in the personal sphere, it’s different. If you use ChatGPT to live your life, that’s your call, but think about the implications. For example, if you’re chatting with your family through an AI-generated avatar, like on the Apple Vision Pro, they don’t see your real face. It may be a very accurate rendering, but it’s still not you. AI is insulating you from your family; it removes one layer of real interaction.

If you’re emailing your family and you use ChatGPT to write it, the words aren’t yours. Then your family might use a summarizer to read it, so they don’t actually read your email either. Now, there are two layers of AI insulation between you and your family. Are you really still communicating with them? Or are you all just interacting with AIs?

In a work context, we don’t care if we’re all just talking to AI as long as it’s efficient and delivers results. But with family, how sad would that be? People don’t think about this enough. You might think, “I don’t use AI.” But what about the filter you applied on TikTok? Or the text you were too lazy to type, so you autocorrected half of it? Was that really what you wanted to say? 

If you use speech-to-text while driving to tell your family something, that’s still your raw text. I think that’s one-to-one enough not to count as AI insulation. But when technology rewrites your words, that’s when the line is crossed. You might clean up photos for work using Photoshop, but would you do that when sending a photo to a loved one? Would you misrepresent yourself that way to someone you just met? AI can misrepresent you, and while that’s fine for entertainment, it becomes a problem when it insulates you from genuine interaction.

What I’m saying may sound alarmist, but consider the times: social media sites encourage users, on a daily basis, to create deepfake-like representations of themselves with various “filters” that only serve to further distort healthy body perception in young users. Or consider pre-university students (K-12, roughly 6 to 18 years old) who are still learning to write, yet will need to decide how they will use generative AI to interact with the world. The South Park episode “Deep Learning” has a good, albeit exaggerated, take on this as well.

In writing this post, I wanted to encourage readers to think about how they use AI in their personal lives. Consider whether what you’re doing is helping you stay connected or separating you from your loved ones, wrapped in layers of plastic bubbles, so far removed that you may no longer remember what actual personal contact with another human is.

Featured photo by Alexander Grey on Unsplash

About the author

Clarence Cai


