
Design The Legal Self Assessment and Learning Tool 2.0

Background

Small non-profit organizations are chronically short on time and resources, which makes finding time to review legal compliance challenging. With no accessible resource that gathers general compliance rules in one place, many of these organizations do not realize they are missing critical information.
Without knowing their compliance blind spots, non-profit organizations could
be putting themselves at risk.

Client

Pacific Legal Education and Outreach (PLEO) Society is a BC-based non-profit that provides accessible legal support for non-profit organizations.

Our solution

An engaging, web-based legal compliance self-assessment that empowers non-profits to assess risk, address vulnerable areas, and easily share findings.

My Role

Product Management | UX & UI design | Illustration

Scope and constraints

For Phase 1, our team developed a proof of concept (POC): the first module of the legal self-assessment tool, Privacy & Confidentiality. After completing this module, users understand what they need to do to meet the privacy and confidentiality requirements that apply to non-profit organizations.
The COVID-19 lockdown influenced the last month of the project. Final user tests were conducted remotely.

Research

1. LSALT 1.0

Based on the first iteration of the tool, which was just a simple SurveyMonkey questionnaire, the guiding goal for this iteration was to create an assessment that is quick, engaging, and personalized.

2. Current non-profit legal compliance self-assessments

Types of currently available online legal information:

Legal education and resource sites

Often dense, text-heavy, and fragmented.

Multimedia lectures

Time-consuming, not always relevant, and don’t always include take-home resources to review.

Online communities and forums

Hard to verify credibility or access the resources needed to answer specific questions.

3. User interviews

The team interviewed non-profit users and subject matter experts to gain a better understanding of their needs.

Main takeaways:

Everyone trusts people more than the internet

Be positive & compassionate

Results need to be prioritized

Design Sprint

To expand our idea generation and create a fully sketched prototype, I facilitated a Design Sprint workshop, which provided the grounds for our first design iteration.

The first iteration of the tool, a chatbot developed collaboratively from the Design Sprint sketches, was A/B tested against a prototype of an Interactive Wizard. Users preferred the chatbot.

Once the approach was decided, we went through repeated rounds of testing and iteration, producing many designs and potential features. We finally landed on a simple, minimalistic UI that users preferred and that aligned with the client's brand guidelines.

Content Structure and Emotional design

Designing for empathy, we expanded the flowcharts provided by the client into a full module script. The script included introduction statements, single-choice questions, multiple-choice questions, varied responses, to-do list items, and supporting information repeated on the results page.

Humans are used to contextual communication. We are innately wired to have conversations, and this subconscious behaviour is what we want to embrace in our tool. We are not trying to create an illusion of conversation with a human and overpromise. We are transparent with users that the experience they are going through is an interactive survey designed conversationally.
Even so, the bot’s personality is our primary focus.

We wrote a script for empathy and tried to humanize our chat experience.

"Emotional design can transform functional products into memorable and enduring experiences."
(Kallol, 2020)

Since we were designing a semi-conversational UI and could not implement natural language processing within the project timeline, we had to script both sides of the conversation: the user’s and the bot’s.

The user’s side of the script is the more constrained part. To keep the experience simple, quick, and effortless within the project timeline, the user chooses between “Yes” and “No” answers. The only additional response available is a request for term clarification, which appears as an answer option whenever the preceding question contains complicated legal terminology. We simplified the language so that no question includes more than one legal term.
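The constraint above (Yes/No answers, plus at most one clarifiable legal term per question) can be expressed as a small data model. This is a minimal sketch in Python; the class and field names are our own illustrations, not the tool's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of one scripted question node. At most one legal term
# is allowed per question, so at most one clarification option can appear.
@dataclass
class Question:
    text: str
    legal_term: Optional[str] = None      # the single legal term, if any
    clarification: Optional[str] = None   # plain-language definition of that term

    def answer_options(self) -> list:
        """Yes/No always; a clarification request only when a term needs one."""
        options = ["Yes", "No"]
        if self.legal_term:
            options.append(f'What does "{self.legal_term}" mean?')
        return options

q = Question(
    text="Do you have a Privacy Policy that covers personal information?",
    legal_term="personal information",
    clarification="Any information about an identifiable individual.",
)
print(q.answer_options())
```

Questions without a legal term fall back to plain Yes/No, which keeps the chat moving quickly.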

What about the bot? How do we define its personality? Is it a friendly peer or a super professional lawyer? Is it focused on advancing the conversation, or does it stop to provide feedback?

To create a delightful experience, we cultivate appropriate, positive emotions during the survey chat. How can we do that?
1. We want to evoke positive emotional reactions by easing the tension with encouraging feedback.

2. Our bot should talk to users in a relatable, reliable voice and tone. It shouldn’t be judgemental. We want it to express emotion, empathy, and encouragement.

3. We use illustrations and animations that our users can relate to – Expressive Imagery. These visuals can demonstrate emotion and help users empathize. We hope these additions to the verbal script create a positive surprise for the user and allow for some breathing space.

4. Humour. Playful words, phrases, and sentences evoke powerful positive emotions that alleviate fear and uncertainty while creating a sense of joy. Our first instinct was to add humour to the experience. However, during user testing we learned to be highly sensitive about this: our users feel vulnerable and frustrated when dealing with legal compliance, and we don’t want to overload them with jokes.

5. We carefully implement micro-interactions as feedback indicators and affordances, so the interface feels more interactive and fun while minimizing wording as much as possible.

The bot’s persona was defined as simple, smart, and understanding, and its voice as concise, positive, and efficient; the visuals had to support this. We translated it into design principles: clear, simple, efficient, and friendly.

While testing the personality approach to the script, we asked users how they imagined the character they were interacting with. Most users found the voice more intuitive or efficient when they imagined it belonging to a character of the opposite sex, and a majority pictured themselves chatting with a character in a formal outfit.

After several iterations, I designed a gender-neutral character, Lia, which was used in the final prototype.

We developed simple animated reactions to implement in the future for quick, non-verbal animated feedback and to replace some of the verbal feedback currently used in the script. We assume that this will make the tool more engaging and shorten the experience.

Final Product

By providing pre-determined extra information content, the user can select how much information they are given before answering the question. This reduces the amount of text content required and gives less experienced users the opportunity to gather more information. The content was purposely short to only provide the information required to answer the question. When multiple concepts need to be described to answer the question, we included this content as an Introduction Statement before the question appears.

We included a scroll animation to help the user understand the navigation through the chat messages. This also provides pacing for the survey to keep a moderate speed while avoiding overwhelming the user.

Future iterations could include:
▪ Animating the character’s reactions to the user’s input
▪ Including stickers and icons as reactions, such as an icon or animation showing that an item was added to the list

To-do items are a feature that allows the user to take away actionable items from the self assessment based on their answers. While we recognize it is difficult to retain large amounts of new information while going through an assessment, this feature helps users remember key information and feel confident that they are not missing important learnings from the assessment. To-do items generate on the go, giving users instant feedback about their answers, and once the assessment is complete, users can save and review these items for future reference.
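The on-the-go generation described above can be sketched as a simple mapping from answers to action items. This is an illustrative Python sketch under our own assumed names and rules; the actual tool's script format is not shown in this case study.

```python
# Hypothetical script fragment: each question node can carry a to-do item
# that is added the moment the user selects a particular answer.
SCRIPT = [
    {"id": "privacy-policy",
     "question": "Do you have a written Privacy Policy?",
     "todo_on": "No",
     "todo": "Draft a Privacy Policy for your organization."},
    {"id": "consent",
     "question": "Do you collect consent before gathering personal information?",
     "todo_on": "No",
     "todo": "Add a consent step to your intake forms."},
]

def build_todo_list(answers: dict) -> list:
    """answers maps question id -> 'Yes'/'No'; returns the to-do list
    accumulated as the user moves through the assessment."""
    todos = []
    for node in SCRIPT:
        if answers.get(node["id"]) == node["todo_on"]:
            todos.append(node["todo"])
    return todos

print(build_todo_list({"privacy-policy": "No", "consent": "Yes"}))
```

Each "No" answer immediately surfaces one actionable item, which is then consolidated on the results page for the user to save and review.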

The results page allows the user to toggle between the consolidated list of To-Do Items or to review extra information about specific questions under each module. This page also indicates the answer that the user selected during the self assessment.

When selecting the “More Info” buttons beside the To-Do Items, the user is directed to the question that corresponds with that item. The extra content stated here aims to help users complete their To-Do Items. For example, if the item asks the user to add a statement to their Privacy Policy, sample text could be included with the corresponding question.

Final User Testing Outcomes

A final usability test was conducted remotely due to COVID-19.

All participants navigated the landing page and self assessment successfully and intuitively. All participants understood the function of the To-Do list, and multiple users described the user experience as “foolproof” or “easy to understand”.

100% are likely to take the full assessment

83% reported an increase in knowledge of privacy

100% are likely to recommend the tool to other organizations

Future iterations

This project is a successful POC of the self-assessment tool.
Next, the full content will be developed, and this conversational experience will become part of a Legal Portal for arts non-profit organizations.