Background

I was brought onto the project in its early phase. The idea was to build a portal to the organization’s learning content. Listening to the core stakeholders, I came to understand the problems we were trying to fix. These included:

  • Poor user and admin experience with the existing primary learning platform;
  • Many disparate learning programs that users had to juggle;
  • A lack of uniform learning paths across roles;
  • Knowledge gaps and unclear learning paths within roles;
  • An unsatisfactory training experience contributing to expensive turnover;
  • The project’s need for an “early win” after a series of false starts.

Interviews; card sort; survey; affinity map

It was obvious this was an ambitious project, so to bring an “early win” within reach I suggested we start by defining and building a minimum viable product (MVP). Although the learning tool was eventually to serve the entire organization, I proposed that we initially focus our efforts on a pilot user group: the one that had the greatest need for a better learning tool and was already using something else. To mitigate the risk of biasing the research and design toward the pilot group to the exclusion of others, I drew half of my participant pool from the pilot group and half from other groups in the organization.

I conducted one-on-one interviews with these users to understand:

  • What areas of knowledge are required for success in their roles;
  • Their comfort level with various types of technology;
  • Their learning styles and preferences;
  • Their on-the-job training and performance review processes;
  • How important they consider training as a factor in job satisfaction;
  • What problems they were experiencing with regard to training and learning.

Each session was followed by a card-sorting activity in which I learned how participants categorized various concepts related to learning, online tools, and career development.

I validated my findings with a follow-up survey and organized the resulting data in an affinity map.

Distilling requirements; information architecture

My research confirmed many of the problems identified by the core stakeholders and uncovered several more. Based on these problems, the requirements for the learning tool became apparent. It would have to:

  • Provide a visually clear learning path whereby users can see what they need to learn to reach a training or career goal.
  • Provide insight into what employees who share a position or skillset are learning to reach a given goal, bringing more uniformity to training regimens within each position or skillset.
  • Accommodate many different kinds of learning content, including extramural and offline content; allow users to add content or recommend it be added to the catalog; and make it all searchable by various tags and categories (as sketched below).
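To illustrate the kind of tag-and-category search this last requirement implies, here is a minimal TypeScript sketch. It is purely illustrative: the names (ContentItem, TagCategory, findContent) and the sample entries are assumptions of mine, not the data model of any platform we evaluated or configured.

```typescript
// Illustrative sketch only: a minimal model of tag-based catalog search.
// The type and function names and the sample tags are hypothetical.

type TagCategory = "subject" | "skillset" | "format" | "source";

interface ContentItem {
  title: string;
  // Tags grouped by category, so the catalog can be filtered along several axes at once.
  tags: Partial<Record<TagCategory, string[]>>;
}

// Return the items that carry every tag requested in the query.
function findContent(
  catalog: ContentItem[],
  query: Partial<Record<TagCategory, string[]>>
): ContentItem[] {
  return catalog.filter((item) =>
    Object.entries(query).every(([category, wanted]) =>
      (wanted ?? []).every((tag) =>
        (item.tags[category as TagCategory] ?? []).includes(tag)
      )
    )
  );
}

// Example: find extramural content relevant to the "data analysis" skillset.
const catalog: ContentItem[] = [
  { title: "Spreadsheet Fundamentals", tags: { skillset: ["data analysis"], format: ["online course"] } },
  { title: "Statistics Workshop", tags: { skillset: ["data analysis"], format: ["offline"], source: ["extramural"] } },
];

console.log(findContent(catalog, { skillset: ["data analysis"], source: ["extramural"] }));
// -> [{ title: "Statistics Workshop", ... }]
```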

From these requirements I devised an information architecture (IA) in the form of a Do-Go map. A Do-Go map combines a site map and a flowchart into a lightweight tool for displaying and testing the components of a system. Each component is represented as a node containing a list of what can be done at that node and where the user can go from that node; a brief code sketch of this structure follows the node list below.

My nodes included:

  • Login. I didn’t know whether the learning tool would be single sign-on, so I included this node just in case.
  • Setup. This is where users would enter their learning goals and existing skillsets in the form of a tag cloud.
  • Learning path. Here, users would be able to view their learning path, which would consist of the learning content they needed to master to reach their learning goals. It might also offer visibility into the learning content that others in their role were accessing.
  • Content catalog. All learning content would be searchable here, by keywords, subject, relevant skillsets, etc. To ensure the content was searchable this way, I proposed it be organized in a “Tagxonomy,” a term I coined for the learning tool’s system of content organization that would be based on categories of tags.
  • Content detail. This node would give more particular information about a piece of learning content and provide ways for users to interact with others accessing the content and with instructors where applicable, to rate the content, and to add it to or drop it from their learning paths. If the content was online, they could also access it directly from this node.
  • Add or recommend content. This is where users could directly add material to the content catalog, or suggest that material be added, subject to a quality control process at the admin level. Content would be tagged as it was ingested into the catalog.
  • Supervisor view. Available to supervisors only, this node would provide insight into the learning progress of the supervisor’s subordinates for tracking and review purposes, and could include some admin privileges, such as content approval.
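To make the Do-Go map’s structure concrete, here is a minimal TypeScript sketch of the nodes listed above. The node names follow the list, but the specific “do” and “go” entries, the DoGoNode type, and the nextNodes helper are abbreviated illustrations of my own, not an artifact recovered from the project.

```typescript
// Illustrative sketch only: each node lists what users can do there ("do")
// and which nodes they can reach from it ("go"). The entries are abbreviated examples.

interface DoGoNode {
  name: string;
  do: string[]; // actions available at this node
  go: string[]; // nodes reachable from this node
}

const doGoMap: DoGoNode[] = [
  { name: "Login", do: ["Sign in"], go: ["Setup", "Learning path"] },
  { name: "Setup", do: ["Enter learning goals", "Enter existing skillsets"], go: ["Learning path"] },
  { name: "Learning path", do: ["View required content", "See what peers are learning"], go: ["Content detail", "Content catalog"] },
  { name: "Content catalog", do: ["Search by tags and categories"], go: ["Content detail", "Add or recommend content"] },
  { name: "Content detail", do: ["Rate content", "Discuss with peers or instructors", "Add to or drop from learning path"], go: ["Learning path", "Content catalog"] },
  { name: "Add or recommend content", do: ["Add content", "Recommend content for admin review"], go: ["Content catalog"] },
  { name: "Supervisor view", do: ["Track subordinates' progress", "Approve recommended content"], go: ["Learning path", "Content catalog"] },
];

// During testing, a quick way to check a participant's answer:
// from a given node, which nodes can they reach next?
function nextNodes(map: DoGoNode[], from: string): string[] {
  return map.find((node) => node.name === from)?.go ?? [];
}

console.log(nextNodes(doGoMap, "Learning path")); // -> ["Content detail", "Content catalog"]
```

A structure like this is also easy to walk during scenario-based testing: a participant’s answer can be compared against the actions and destinations the map actually allows from their current node.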

I tested this IA with users by showing them the Do-Go map and asking them which node or nodes they would visit to complete tasks driven by a set of realistic hypothetical scenarios. I compared their answers with my hypotheses and asked them for their reasoning, either to confirm that their understanding matched the intended design or to learn how their understanding differed from it. Users appreciated the functionality, and some suggested that a few of the nodes be combined so the learning tool would be simpler to navigate.

Communicating design recommendations; advocating for ethics

Initially, the plan was for me to draw wireframes that would inform the development of the learning tool. When the plan changed and we would instead be configuring a learning platform the organization had already purchased, my design deliverables became a set of storyboards and what I termed a “design manifesto.” The storyboards would illustrate a use case for the learning tool, to guide configuration efforts and communicate the product vision to stakeholders. The design manifesto would list the key design principles to keep in mind during configuration, based on findings from my research. This way we would not lose sight of the problems we were trying to solve even though we were working with a tool another organization had created.

At various points in the project I stood up for users’ autonomy and privacy. For example, one of the platforms we vetted used social media websites (Facebook, Twitter, etc.) to facilitate peer-to-peer interactions. The demonstration made it seem as though users would have to create accounts for these websites or use existing social media accounts in order to access that functionality. I knew from my research that many users rejected social media, and I also knew that social media websites track users in ways they might not know about or approve of. I would not support the adoption of this platform until I was assured that users would not be forced to sign up for social media to access its peer-to-peer functionality.

Another example came when a vendor demonstrated a product that met the learning path requirement I had identified, but that determined users’ learning goals and skillsets by mining this data from their emails, ostensibly to save users some effort. I objected on behalf of users to this invasion of privacy. The vendor responded that users already agree to have their emails mined in this way when they sign on as employees. I disagreed and maintained my objection: the “non-expectation of privacy” employees accept typically relates to performance and security (e.g., determining whether an employee is involved in illegal activity), not to common job functions such as receiving training. Moreover, there was no strict need to mine learning goal and skillset data from emails when users could simply enter it themselves. In the end we declined to purchase that product.
