Being Human in the Age of Generative AI

[Featured image by Andy Kelly on Unsplash]

Discussions about generative AI are everywhere right now, and I feel compelled to add my take to the mix. However, I’m hoping to bring you something a little different. Rather than focus on the endless world of possibilities generative AI will create or the jobs it will destroy, I want to step back.

Instead of focusing on the AI itself, I want to focus on the people — that’s us — who will have to adjust to stay relevant.

What traits will keep us relevant in the age of generative AI?

Linear Thinking Won’t Work

Initially, I tried to approach this question linearly. I imagined the jobs that would exist in a world shaped by AI and tried to build a curriculum that would prepare someone for them.

I soon realized that kind of thinking is futile. The truth is that we have no idea how AI will change things, because it will change them in ways we can’t yet predict or imagine. The very nature of its impact means it will disrupt existing patterns and upend predictions.

Think back to the early days of high-speed mobile connectivity (4G and 5G networks). It was obvious that sharing large files would soon be the norm. What was less obvious — what few could have predicted — was that entire professions and economies would be built around social media personas. The moment we made content easily shareable, our demand for content increased exponentially in the face of a very obvious limitation: the hours in the day.

Just as the most obvious applications of better mobile connectivity gave way to less obvious and unpredictable changes, generative AI’s true impact won’t be apparent until it has already happened.

Process of Elimination Offers a Path Forward

If we can’t work linearly through a plan to prepare for a future shaped by generative AI, what can we do instead?

We can use the process of elimination. By this, I mean we can analyze the limitations of a world with scaled access to generative AI, identify what will remain valuable, and focus our limited human minds where they can make a meaningful difference.

The Human Factor

Use Empathy to Embrace Different Perspectives

[Image author unknown – please contact me for credit/attribution]

When connectivity made its transformational leap, we moved from a world where knowledge was difficult to access to one where almost anyone could reach it in a matter of seconds.

We had to adjust to this new flow of information, and content became the next bottleneck. As a result, huge economies popped up around content creation, and networks made massive investments in streaming. Meeting the nearly insatiable demand for content created new opportunities and changed paradigms.

As generative AI increases in accessibility and capability, content generation will no longer be a problem. However, the widespread availability of AI-generated content will create a new problem: content discovery. 

Personalization and recommendation engines have already been put in place to address this problem, curating information in ways that capture attention and drive engagement.

The outcome of these approaches, however, is likely to cause its own set of problems. Put crudely, a recommendation engine works by suggesting things that people similar to you have liked. It’s trial and error brought to massive scale.
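To make that concrete, here’s a minimal sketch of user-based collaborative filtering in Python. The users, items, and ratings are hypothetical, and real engines are vastly more sophisticated, but the core idea is the same: score unseen items by the ratings of similar users.

```python
# A minimal sketch of "people like you liked this" recommendation
# (user-based collaborative filtering). All names and ratings are
# hypothetical; real engines are vastly more sophisticated.
import math

ratings = {
    "alice": {"doc_a": 5, "doc_b": 4, "doc_c": 1},
    "bob":   {"doc_a": 5, "doc_b": 5, "doc_d": 4},
    "carol": {"doc_c": 5, "doc_d": 2},
}

def similarity(u, v):
    """Cosine similarity between two users over their co-rated items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(target, ratings, top_n=3):
    """Score items the target hasn't seen by similar users' ratings."""
    mine = ratings[target]
    scores = {}
    for user, theirs in ratings.items():
        if user == target:
            continue
        sim = similarity(mine, theirs)
        for item, rating in theirs.items():
            if item not in mine:  # only items the target hasn't seen
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # -> ['doc_d']
```

Notice that nothing in this loop knows what the content actually says; it only knows who else engaged with it. That is exactly why similar users keep converging on the same material.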

The problem we’ve already seen is that the more effective a recommendation engine becomes, the more polarization it produces. People fall into echo chambers and feel the pull of confirmation bias (seeking out results that confirm what they already believe).

In addition, the information the algorithms deem most “valuable” (because it gets the most engagement) receives immense exposure in a feedback loop, while other content languishes with almost no exposure at all.
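A toy simulation makes this feedback loop visible. The sketch below is my own illustration, not any platform’s actual ranking logic: exposure is allocated in proportion to past engagement, so a few early winners quickly absorb most of the attention.

```python
# A toy model of the engagement feedback loop (illustrative only):
# items that have earned more engagement are shown more often,
# which earns them more engagement, and so on.
import random

random.seed(42)

engagement = [1.0] * 10            # ten items start out perfectly equal
for _ in range(10_000):            # each step: show one item to one user
    shown = random.choices(range(10), weights=engagement)[0]
    if random.random() < 0.1:      # the user engages 10% of the time
        engagement[shown] += 1

print(sorted(engagement, reverse=True))
# Typically a handful of items end up with most of the engagement
# while the rest languish, despite identical underlying quality.
```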

These existing problems are about to get a lot more extreme. Right now, we scan pages of Google results and choose an answer from multiple perspectives. With ChatGPT and Bing promising a single conversational answer deemed algorithmically to be “the best one,” those other points of view will be flattened. We risk entering a very one-dimensional understanding of the world.

Without anyone to challenge the answer that’s been deemed the most common on a large scale, we risk losing diverse perspectives and meaningful debate. This will foster groupthink and the mistaken belief that only one good answer exists. 

Luckily, humans already possess a superpower strong enough to take on this problem: empathy.

Empathy will allow us to consider different perspectives and hold conflicting views in our minds.

A way to stand out in the future will be to seek different perspectives, respectfully and empathetically challenge the general consensus, and foster inclusiveness rather than polarization.

One way to be the best version of yourself in a world of AI is to intentionally challenge your own point of view and research different perspectives on any given topic.

Apply Ethics on Top of Algorithms

[Photo by Niklas Ohlrogge on Unsplash]

Algorithms are excellent at optimizing for a given desired outcome. They are not good at predicting or preventing the unexpected and unwanted consequences that come with radical system optimization.

Look at what happened with social networks and their advertising-based monetization model. To keep users engaged on the site (the outcome for which the system was optimized), content became increasingly incendiary and polarizing, because that content is very good at keeping us — angrily — engaged.

Algorithms didn’t set out to make us angry. They simply found a vulnerability in the way we are wired and used it to reach the desired outcome. 

Once again, humans already have access to the tool that addresses these concerns. 

Ethics allow us to accept a suboptimal short-term outcome to foster longevity. 

While many thought experiments in ethics focus on catastrophic outcomes, we can see the importance in even relatively minor examples within marketing.

Marketers cannot focus all their investment on short-term demand capture. They must also think about long-term demand generation by growing their brand equity.

Future success will go to those who can balance short- and long-term goals through an ethical approach. 

Get Abstract to Rethink the Problem

[Image author unknown – please contact me for credit/attribution]

As explored above, AI excels at producing efficient solutions to a given problem. Most of the time, that’s great!

However, we’ve all been in a meeting where we’re spinning our wheels, arguing over and over about a topic without reaching a conclusion. At some point, someone has a moment of clarity. “Wait a second! What are we even arguing about? This doesn’t even matter in the grand scheme of things!” 

It turns out that accurately identifying problems can sometimes be as difficult as solving them. 

One of the most brilliant minds of the last century, Albert Einstein, explained it simply: “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.”

Generative AI offers us the opportunity to solve problems more efficiently than ever, but that means it will become more important than ever to frame the problem correctly. 

The human ability to consider abstract concepts lets us step back to the macro context and question whether the problem we are trying to solve is framed correctly.

History and academia have taught us over and over again that reframing a problem can drive true innovation.

Get Ambitious and Do Your Homework (Plus Some Extra Credit!)

[Photo by Aaron Burden on Unsplash]

One likely outcome of widespread generative AI is that it will remove the barrier to iteration. A simple yet very practical example is the spread of digital testing tools that let practitioners set up tests rapidly, then get lost tweaking thousands of elements while making only marginal progress.

Why is this a problem? In business (as in life), we should never confuse motion with progress. Throwing things at the wall to “see what sticks” can feel like moving forward without actually getting anywhere.

Being able to create content effortlessly risks creating dangerous generalists who are careless about opportunity costs or the unintended damage they do.

We need to be more disciplined . . . and more ambitious. 

AI is going to save time on content creation, and we should use that time to become more proficient in the critical elements of our jobs. 

For example, those in charge of conducting tests could get certified in proper test design and avoid design bias by isolating all the possible variables; explain how their work ladders up to the communication strategy; and study the rules that govern consumer attention, the building of memory structures, and a whole slew of things they never contemplated before.
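To illustrate one small piece of that discipline, here is a minimal sketch of evaluating a test in which a single variable has been isolated, using a standard two-proportion z-test. The traffic and conversion numbers below are hypothetical.

```python
# A minimal sketch of analyzing an A/B test where a single variable
# (say, the headline) has been isolated. Numbers are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B changes only the headline; everything else is held constant.
z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=245, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 95% level
```

The point is not the statistic itself but the habit it enforces: decide what you are measuring, isolate it, and size the test before you run it, rather than tweaking a thousand elements at once.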

As we increase our productivity through technology, our ambition pushes us to generate greater outcomes rather than take the time back. It may be a questionable choice, but it’s one we have made for millennia.

We have the opportunity to reclaim that time and use it to learn new skills, be more methodical in our approach, and find ways to be more effective. 

Address Emotions to Drive Alignment

[Photo by Tengyart on Unsplash]

One critical way of gaining impact beyond expanding our skills and knowledge is to make a concerted effort to foster trust amongst colleagues and drive greater alignment.

In the corporate world, much of the challenge professionals face is not finding the solution to a problem but managing the “human factor” in the work environment.

While there are legitimately complex problems that AI will help solve, such as writing code or predicting the shape of proteins, a lot of work is spent aligning internal stakeholders to a plan or goal. 

Think of all the PowerPoint presentations and Excel analyses you have put together in the past year. Why did you take the time to make them? It was probably to convince someone of something. In that respect, generative AI won’t be so different from the Microsoft software suite: it speeds up and enhances work you once did manually, but it doesn’t eliminate the final task of driving alignment across colleagues and key stakeholders. In other words, it doesn’t handle the emotional element of getting people to understand.

Patrick Lencioni argues in “The Five Dysfunctions of a Team” that “If you could get all the people in an organization rowing in the same direction, you could dominate any industry, in any market, against any competition, at any time.” The foundational element for achieving alignment is establishing trust among team members: giving them the space to be vulnerable with one another and to feel comfortable seeking help or admitting mistakes without getting defensive. Lencioni also argues that conflict is not something to avoid but something to embrace and manage from a place of trust and a shared desire for collective progress.

In my career, I’ve had to drive a lot of change management across different organizations. Although I’m always learning, I’ve found there are four key steps to driving effective change:

  1. Share a common goal and explain why the desired change will benefit the people you are talking to.
  2. Ask yourself what drives the pushback: oftentimes, it’s either an organizational issue (e.g., misaligned incentives or a critical factor not previously considered) or an emotional response (e.g., fear that the change may threaten their role within the organization).
  3. Address the underlying driver and find a way forward that allows for a win-win outcome (as explained by Stuart Diamond in Getting More).
  4. Establish a process and incentive system that reinforces the new ways of working. 

Each of these steps is underpinned by the same foundational element: emotion. Emotions are a surprisingly common driver of people’s behavior. So while generative AI will make us more productive and help us solve complex problems, it will be on us to find ways to work well together, establish trust, and constructively and respectfully share differing opinions and points of view.

We’ll have to lean on our human traits of empathy, ethics, abstraction, and ambition to drive alignment, and therefore, progress. 

What’s Next? 

To be clear, I’m not against the use of generative AI, and this post is not meant to be an ode to human imperfection. Amara’s Law applies perfectly to generative AI: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

To responsibly wield the power of this new technology, we need to be conscious of its limitations and be the ones who promote its proper use and fill the gaps.

Generative AI will bring remarkable increases in content output and iterative potential that will transform our future, but we must remember where our own abilities continue to shine.

Author: Paolo

Economist by education, marketer by profession, coffee roaster by hobby.