sparck - strategic design consultancy
Creating an employee digital assistant

CONTEXT:

We want to be at the forefront of using AI to meet people’s needs, and we want to practice what we preach, so we’ve set ourselves a challenge: ‘What problems can Artificial Intelligence (AI) solve within BJSS?’ We started with an open mind and three aims:

1. Improve the employee experience
2. Enhance management insights 
3. Create a new tool we can showcase to our clients!  

APPROACH:

We adopted our Design Thinking approach to understand employee pain points and motivations, generate innovative solutions using AI, and build a working coded prototype.

WHAT DID WE DO?

STEP 1 - RESEARCH

AI is all around us!

Our first sprint started in October with user research across our UK and US offices. We ran workshops (including a virtual workshop for NY) and interviews, and did a lot of research into machine learning, natural language processing (NLP) and conversational platforms!

People suggested the things that bugged them most at work: their ‘pain points’. These became our challenges to overcome. We clustered similar items into challenge categories: recruitment and onboarding; resourcing and roles; career development; internal communication; and finding people.

People were creative, engaged and enthusiastic! They raised relevant needs, but they also came up with some fantastic ideas that helped us shape our AI digital assistant.

We had a number of assumptions at the start of the research about the type of challenges people faced; however, what they told us was different. You can never second-guess what research will uncover.

RESEARCH ARTEFACTS

WHAT DID OUR RESEARCH PRODUCE?

After a couple of weeks, we created some key design artefacts, including personas, a detailed pain point canvas, a pain point heat map and key research themes.

Personas 
A persona is a summary of data related to a particular audience group. It can include pain points, needs and desires, as well as social preferences, motivations and limitations. We create personas so that when we design a service we don't end up designing it for ourselves, but for the target customer.

Detailed Pain Point Maps
We explore needs and pain points in detail to understand the challenges, and then create heat maps to identify important themes. In this case, the heatmap quickly highlighted three areas which became our focus for an MVP:

1. Improve information sharing, insight and knowledge transfer
2. Broaden and improve communication
3. Improve networking and people matching

A PAIN POINT HEAT MAP
Interestingly, each office had noticeable differences, illustrating the distinct culture and identity of each location.  The challenges were simpler than expected, although we should have known – most just wanted an assistant for their daily admin! 

STEP 2 - DESIGN

WE THEN MOVED TO RAPID PROBLEM SOLVING

After a successful research phase, we were ready to move into Rapid Design. We started by inviting a group of technologists, BAs and technical architects to an ideation session.  In just a few hours, we had a wealth of innovative ideas that addressed many of our pain points. 

IDEATION WORKSHOPS IN FULL SWING

After merging these ideas with suggestions from our research workshops, we prioritised improving information sharing, insight and knowledge transfer, and began to form the overarching concept.

Principles and data in scope
We had our goal, now to set the scope. It was important for us to create an MVP that staff would be happy with, so we defined two key principles: 

1. It will integrate with the systems we already use - Scheduling, Time Management and Performance Management (accessed via both their APIs and their databases).
2. It will involve a chatbot front end – chat services were popular in desk and user research, so it made sense to use a familiar interface.

User Stories & Idea Canvas
Epics and User Stories added depth to our scope before we started prototyping.  We narrowed our MVP features to six key Epics related to projects, people, offices and general company information.

AN IDEA CANVAS

A conversational assistant is only as good as the inputs and outputs it has been taught, so we started to shape a series of questions for our digital assistant to answer. Mapping out user journeys and defining likely conversation inputs and outputs was critical to our conversation design, while the questions and answers we taught it informed our conversational platform.
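
To make the conversation design concrete, here is a minimal sketch of how an intent catalogue might look, pairing example utterances with response templates. The intent names, examples and templates are hypothetical, for illustration only.

```python
# A minimal sketch of an intent catalogue: each intent pairs example user
# utterances with a response template. Intent names, examples and
# templates are hypothetical, not our real training data.
INTENTS = {
    "who_works_on_project": {
        "examples": [
            "Who works on the Romeo project?",
            "Who is assigned to Romeo?",
        ],
        "response_template": "{people} are currently assigned to {project}.",
    },
    "office_info": {
        "examples": [
            "What is the address of the Leeds office?",
            "Where is the New York office?",
        ],
        "response_template": "The {office} office is at {address}.",
    },
}
```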

Our company, BJSS, has over a thousand employees so it was important to design a conversational tone that would be appealing to such a diverse audience.

STEP 3 - PROTOTYPE

CHRISTMAS IS USUALLY A QUIET TIME... NOT FOR US!

Leading up to the Christmas break, we were lucky enough to have a team of BAs, UX designers, data scientists, project managers, technical and AWS architects, and developers working on a spike for one week. Following the design principles, the team was tasked with creating a Slack bot that employees could use to find out information about projects and colleagues, and other insights about their own current and past assignments. The objective was to build a working prototype in AWS utilising Lambda, Comprehend and Elasticsearch.

Data scientists and technical architects worked together to build the chatbot architecture, while BAs and UX designers created process flows of how the conversation and data flow would work.

HOW DID WE MAKE IT?

With just one week, we built a data ingestion pipeline and a simple query handler.

We indexed our Project Management system's assignment data in AWS Elasticsearch, a full-text search engine that handles document-oriented data. The indexed documents mirror the semantics of the PM system, e.g. Person, Client, Project.
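
As a rough illustration, here is how assignment records could be bulk-indexed with the Elasticsearch Python client. The index name, field names and record format are assumptions, not our real PM-system schema.

```python
# Sketch: bulk-indexing assignment records into Elasticsearch.
# Index name, field names and the record format are assumptions.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

assignments = [
    {"person": "Alice Smith", "client": "Acme", "project": "Romeo",
     "role": "Developer", "start": "2018-10-01", "end": "2018-12-21"},
    {"person": "Bob Jones", "client": "Acme", "project": "Romeo",
     "role": "BA", "start": "2018-11-05", "end": None},
]

actions = (
    {"_index": "assignments", "_source": doc} for doc in assignments
)
helpers.bulk(es, actions)  # one round trip for the whole batch
```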

We then used an AWS Lambda function connected to Slack to handle the user's query. We processed the query with Comprehend to detect key terms, like a company name. We hard-coded the intent of the question to be a "who"-style question, and used the intent and the key terms to construct a query against Elasticsearch.
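
A minimal sketch of what that handler might look like, assuming a Slack Events API payload and the hypothetical "assignments" field names from above; this is an illustration of the flow, not a drop-in implementation.

```python
# Sketch of the Lambda handler for the spike: pull the message text out
# of the Slack event, detect key terms with Comprehend, and build an
# Elasticsearch query. Payload shape and field names are assumptions.
import boto3

comprehend = boto3.client("comprehend")

def build_query(intent, key_terms):
    # For the hard-coded "who" intent, match the detected terms
    # against the project field of the assignment documents.
    return {"query": {"match": {"project": " ".join(key_terms)}}}

def handler(event, context):
    text = event["event"]["text"]  # message text from the Slack event
    entities = comprehend.detect_entities(
        Text=text, LanguageCode="en"
    )["Entities"]
    # Keep organisation-like terms, e.g. a client or project name
    key_terms = [e["Text"] for e in entities
                 if e["Type"] in ("ORGANIZATION", "OTHER")]
    return build_query("who", key_terms)  # intent hard-coded for the spike
```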

The Elasticsearch query syntax is not that well documented; it took quite a lot of trial and error, so allocate time for this in your own spike.
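
For illustration, one query body of the kind we converged on for a "who works on &lt;project&gt;" question; the field names are assumptions about the indexed documents.

```python
# Illustrative Elasticsearch query body: match the project name and
# keep only assignments without an end date (i.e. still active).
# Field names are assumptions, not our real schema.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"project": "Romeo"}}],
            "must_not": [{"exists": {"field": "end"}}],
        }
    }
}
# hits = es.search(index="assignments", body=query)["hits"]["hits"]
```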

By understanding the intent and the shape of the source documents, we were able to construct a suitable reply: for example, when given a Project, we can list the Persons who worked on it.
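
A small sketch of that reply construction, assuming the hits carry "person" and "project" fields:

```python
# Sketch: turning Elasticsearch hits into a chat reply for a "who"
# question. Assumes each hit's _source has "person" and "project".
def format_reply(hits):
    people = sorted({h["_source"]["person"] for h in hits})
    if not people:
        return "I couldn't find anyone on that project."
    project = hits[0]["_source"]["project"]
    return f"{', '.join(people)} worked on the {project} project."
```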

This approach can be extended by better detection of different intents, which lead to differently structured queries and differing presentations of the results. By combining multiple data sources, we will be able to handle a wider range of queries, such as matching people's skills to opportunities.

Two key considerations when designing your own product: have a good understanding of the data available, and establish confidence in its quality early on.

CHRISTMAS? WHAT CHRISTMAS?

HOW DID WE TURN UNSTRUCTURED DATA INTO OUTPUTS FOR USERS?

1. Matching intent - this is when our digital assistant gets robotic! 
The intent matching system takes the user's text as a parameter, either pure text from a chat window or speech transformed into text from, for example, Alexa or Google Assistant. The intent matching system then uses NLP to process the text and match it to a pre-programmed intent that the algorithm has learned to recognise.

This is done by having a set of pre-programmed intents or tasks with some examples of what users might write. Based on these examples, the NLP engine takes the user's text and matches it with the intent that is most similar (highest probability) to the examples given.
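
The sketch below illustrates that principle with TF-IDF cosine similarity against example utterances; a real NLP engine does considerably more, and the intents and examples here are hypothetical.

```python
# A minimal sketch of example-based intent matching: score the user's
# text against each intent's example utterances with TF-IDF cosine
# similarity and pick the best match. Intents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

examples = {
    "who_works_on_project": ["who works on the Romeo project",
                             "who is assigned to Romeo"],
    "office_info": ["where is the Leeds office",
                    "what is the address of the New York office"],
}

corpus = [u for utts in examples.values() for u in utts]
labels = [i for i, utts in examples.items() for _ in utts]
vectorizer = TfidfVectorizer().fit(corpus)
matrix = vectorizer.transform(corpus)

def match_intent(text):
    scores = cosine_similarity(vectorizer.transform([text]), matrix)[0]
    best = scores.argmax()
    return labels[best], float(scores[best])  # (intent, confidence)

print(match_intent("who's working on Romeo?"))
```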

Slack is our platform of choice for day-to-day communication, so it was the obvious front end for the MVP. Text entered in Slack is passed to our intent matching system.

2. Executing the intent function - behind the scenes! 
When the NLP engine has understood what the user has asked and fired an intent, that intent contains instructions for a program that looks the answer up in the BJSS databases. It's a bit like an in-house Google that searches through all the information we have and returns the answer to the user.

So, when the user asks our AI digital assistant 'Who works on the Romeo project?', the search algorithm searches the database and returns the people working on the Romeo project to the AI tool. Our digital assistant then replies to the user with the names of the people working on that project.

In the spirit of design thinking, we must test our small prototype and gather feedback from users on its ability to understand queries. We store every question users ask and the rating they give each answer, so we can use this information to improve the chatbot and its natural language processing.
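
A minimal sketch of that feedback logging, assuming a hypothetical DynamoDB table; any datastore would do.

```python
# Sketch: logging each question, answer and user rating so the training
# data can be improved later. The table name and attributes are
# assumptions, not our real schema.
import time
import boto3

feedback_table = boto3.resource("dynamodb").Table("chatbot-feedback")

def log_feedback(question, answer, rating):
    feedback_table.put_item(Item={
        "question": question,
        "answer": answer,
        "rating": rating,            # e.g. thumbs up / down from Slack
        "timestamp": int(time.time()),
    })
```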

NEXT STEPS

BACK TO USERS WE GO... AND ASK FOR FEEDBACK!

After an intense week, the team deployed a working prototype in AWS that returns answers extracted from our data sets. It is worth highlighting that, with the approach we took and the timeframe we had, we built a search engine with some elements of machine learning and NLP; it is not an AI digital assistant just yet.

Our vision is to develop a system that 'understands' the data available and improves its quality through user feedback. It will transform raw data into useful information that improves in quality and relevance over time. With this, our digital assistant will be able to respond to 'ask' questions, filling in gaps or inconsistencies in its knowledge.

Our team worked really hard to build a working prototype; however, it's not 'Sophia' the robot. Managing user expectations is as important as getting our MVP working.

Watch this space for our next update on our Pilot!