Generative AI and Ethics: Handling Sensitive Conversations with a Digital Assistant

Posted: 07/11/2024

By John Gamble

Generative AI is a powerful technology that can create realistic and engaging content, such as text, images, audio and video. It can also be used to design and deploy 'digital assistants', such as chatbots, voice assistants and virtual agents, that interact with humans in natural language. Because these assistants talk directly to people, an emphasis on ethical AI is essential to maintaining integrity and respect in those interactions.

At C5 Alliance, we are proud to offer our HR digital assistant, Vega, as a solution for streamlining and enhancing the employee experience. Vega can answer common questions, provide guidance and offer feedback on various HR topics such as benefits, policies, performance and career development.

However, as with any technology, generative AI also poses some ethical challenges and risks. One of the questions we often get from our clients is: what do we do if the digital assistant has concerns about the wellbeing of an individual? What happens if the conversation becomes personal, or a straightforward exchange about feeling ill develops into something more serious about mental health? Is there a duty to flag this to a human or raise an alert? A human certainly would – so should a digital assistant?

In this article, we will explore these questions and share some of our perspectives on how organisations could handle sensitive conversations with a digital assistant while respecting the privacy, autonomy, and dignity of the employees. This is a complex area to navigate and we’re certainly not saying we’ve got this 100% right, but these are our current thoughts on this evolving topic.

How should a digital assistant detect and respond to sensitive conversations?

One of the key features of generative AI is its ability to learn from data and generate relevant and coherent responses based on the context and the user's intent. However, in an HR context, this also means that the digital assistant may encounter situations where the user expresses negative emotions, raises personal issues or shows genuine distress.

For example, an employee may ask the digital assistant for feedback on their performance, and then express dissatisfaction, frustration or anger about their job or line manager.

What happens then? How should a digital assistant handle these scenarios? Should it ignore them, redirect them, or escalate them to a human?

At C5 Alliance, we believe that the best approach would be a balance of the following principles:

Empathy

The digital assistant should acknowledge the user’s emotions and show compassion and support.

Accuracy

The digital assistant should provide accurate and consistent information and avoid giving misleading or incorrect answers.

Privacy

The digital assistant should respect the user’s privacy and confidentiality and not disclose or record any sensitive or personal information without the user’s consent.

Autonomy

The digital assistant should respect the user’s autonomy and choice and not coerce or manipulate them into taking any action or decision.

Safety

The digital assistant should protect the user’s safety and wellbeing and alert human support if there is a risk of harm or personal danger.

Based on these principles, we have designed Vega, our HR digital assistant, to detect and respond to sensitive conversations in the following ways (a simplified sketch follows the list):

  • If the user expresses a negative emotion such as sadness, anger or fear, Vega will respond with empathy and offer some positive affirmation or encouragement. For example, “I’m sorry to hear that you are feeling sad. You are not alone. You are doing a great job.”
  • If the user shares a personal issue, such as a health problem, a family matter, or a financial difficulty, Vega will respond with empathy and offer some general advice or resources. For example, “I’m sorry to hear that you are going through a tough time. You may want to talk to someone who can help you. Here are some contacts for our employee assistance program.”
  • If the user asks a question that is outside of Vega’s domain or scope, Vega will respond with honesty and direct the user to a more appropriate source. For example, “I’m sorry, I don’t have the answer to that question. You may want to contact your manager or HR representative for more information.”
  • We have not engineered Vega to respond in a fixed way to particular questions; instead, we have used prompt engineering to require Vega to cite the source of the factual information in each of its responses.
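
To make this concrete, here is a minimal sketch of how this kind of routing and escalation could work. It is illustrative only: the keyword lists, function names and canned responses are assumptions made for this article, and a production assistant such as Vega relies on an LLM guided by prompt engineering rather than simple keyword matching.

```python
# Illustrative sketch only: term lists, responses and hooks are invented
# for this article; they are not Vega's actual implementation.

DISTRESS_TERMS = {"hopeless", "self-harm", "can't cope", "unsafe"}
PERSONAL_ISSUE_TERMS = {"ill", "health", "family", "money", "debt"}
NEGATIVE_EMOTION_TERMS = {"sad", "angry", "afraid", "frustrated"}

# Hypothetical system prompt showing how citations can be required
# through prompt engineering.
SYSTEM_PROMPT = (
    "You are an HR digital assistant. Answer only from the HR documents "
    "you have been given, and cite the document each factual statement "
    "comes from. If a question is outside HR topics, say so and suggest "
    "contacting a manager or HR representative."
)


def alert_human_support(user_id: str, message: str) -> None:
    """Hypothetical escalation hook; a real deployment might raise a
    ticket or notify an on-call HR contact."""
    print(f"[ESCALATION] user={user_id}: {message!r}")


def call_llm(system_prompt: str, user_text: str) -> str:
    """Placeholder for the underlying model call."""
    raise NotImplementedError("wire up your LLM provider here")


def route_message(user_id: str, text: str) -> str:
    """Return a response, escalating to a human where there may be a
    risk of harm."""
    lowered = text.lower()

    # Safety first: possible distress is always flagged to a human.
    if any(term in lowered for term in DISTRESS_TERMS):
        alert_human_support(user_id, text)
        return ("I'm concerned about what you've shared. With your "
                "agreement, I'd like a member of the HR team to contact "
                "you. You can also reach our employee assistance "
                "programme at any time.")

    # Personal issues: empathy plus signposting to resources.
    if any(term in lowered for term in PERSONAL_ISSUE_TERMS):
        return ("I'm sorry to hear you're going through a tough time. "
                "Here are the contacts for our employee assistance "
                "programme.")

    # Negative emotions: acknowledgement and encouragement.
    if any(term in lowered for term in NEGATIVE_EMOTION_TERMS):
        return ("I'm sorry to hear you're feeling that way. You're not "
                "alone, and support is available if you'd like it.")

    # Everything else goes to the model with the citation requirement.
    return call_llm(SYSTEM_PROMPT, text)


if __name__ == "__main__":
    print(route_message("emp-042", "I feel hopeless and can't cope"))
```

The important design choice here is order: the safety check runs before anything else, so a possible risk of harm is never handled by a canned empathy response alone.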

Ensuring ethical and responsible use of generative AI

While we believe that our approach to handling sensitive conversations with Vega is ethical and responsible, we also recognise that generative AI is not infallible, and both the technology and its governance are moving very quickly. There may be cases where Vega makes a mistake, misinterprets the user's intent, or generates an inappropriate or harmful response.

With these scenarios in mind, we also take the following measures to ensure the ethical and responsible use of generative AI in our HR Digital Assistant:

  • We monitor and evaluate Vega’s performance and quality on a regular basis and update and improve the data, models, and algorithms accordingly.
  • We ensure users are trained in how to use generative AI.
  • We conduct thorough testing and validation of Vega’s responses and outputs before deploying them to users, and ensure that they meet our standards and expectations (a simple validation sketch follows this list).
  • We continue to proactively manage the data (policies, procedures, systems) that Vega has access to.
  • We provide clear and transparent disclosure and consent to the users about the nature, purpose, and limitations of Vega and how their data and information will be used and protected.
  • We enable the users to provide feedback and report any issues or concerns with Vega and address them promptly and effectively.
  • And finally, we of course adhere to the relevant laws, regulations, and ethical guidelines that govern the use of generative AI and digital assistants in the HR domain and the workplace.
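
To illustrate the testing and validation point, below is a minimal sketch of automated pre-deployment checks on the assistant's responses. The citation format, banned phrases and test cases are invented for this article; a real pipeline would be considerably broader.

```python
import re

# Illustrative checks only; the citation format and banned phrases are
# assumptions made for this sketch.
CITATION_PATTERN = re.compile(r"\[source:[^\]]+\]")  # e.g. [source: Leave Policy v3]
BANNED_PHRASES = ("guaranteed outcome", "legal advice", "medical diagnosis")


def validate_response(response: str) -> list[str]:
    """Return a list of problems; an empty list means the response passes."""
    problems = []
    if not CITATION_PATTERN.search(response):
        problems.append("no citation for factual content")
    for phrase in BANNED_PHRASES:
        if phrase in response.lower():
            problems.append(f"contains banned phrase: {phrase!r}")
    return problems


# A tiny regression suite of expected responses, run before each release.
GOLDEN_RESPONSES = [
    "You are entitled to 25 days of annual leave [source: Leave Policy v3].",
    "Your notice period is three months.",  # should fail: no citation
]

if __name__ == "__main__":
    for response in GOLDEN_RESPONSES:
        issues = validate_response(response)
        status = "PASS" if not issues else "FAIL: " + "; ".join(issues)
        print(f"{status} -> {response}")
```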

Conclusion

In the HR domain, generative AI is a game-changing technology that can revolutionise the employee experience. However, it also comes with ethical challenges and risks, especially when it comes to handling sensitive or personal conversations with a digital assistant.

At C5 Alliance we are committed to using generative AI in a way that is ethical, responsible, and beneficial for our clients and their employees. We have designed our HR Digital Assistant, Vega, to detect and respond to sensitive conversations with empathy, accuracy, privacy, autonomy and safety. We also take various measures to ensure the ethical and responsible use of generative AI in our HR Digital Assistant.

We believe two factors are critical to maintaining this:

  1. Transparency – users must be fully informed not only at launch but throughout the lifecycle of the digital assistant.
  2. Data Quality and Governance – you must know what data your virtual assistants have access to and have confidence that it is accurate and up to date; the sketch below shows one simple way to keep track.
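
On the second point, one practical pattern is a simple register of every data source the assistant can read, each with an owner and a review date, so stale content can be withheld until it is re-approved. The sketch below is illustrative; the field names and entries are our own assumptions.

```python
from datetime import date

# Illustrative register; field names and entries are assumptions for
# this sketch.
DATA_SOURCES = [
    {"name": "Leave Policy", "owner": "HR Operations",
     "last_reviewed": date(2024, 5, 1), "review_every_days": 180},
    {"name": "Benefits Handbook", "owner": "Reward Team",
     "last_reviewed": date(2023, 9, 15), "review_every_days": 180},
]


def stale_sources(today: date | None = None) -> list[str]:
    """Names of sources overdue for review; these could be withheld
    from the assistant until re-approved."""
    today = today or date.today()
    return [
        src["name"] for src in DATA_SOURCES
        if (today - src["last_reviewed"]).days > src["review_every_days"]
    ]


if __name__ == "__main__":
    print("Overdue for review:", stale_sources())
```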

For more information about how we can support your organisation with data and AI solutions, email us at [email protected]
