Bottom-up standardization

For the past year my team at Ford has been working on a project that involves us visiting the company’s various manufacturing plants across North America, where we’ve been conducting interviews and contextual inquiries with workers. Although we are trying to understand a technology problem that is common to basically all the sites we’ve visited, each one has its own needs, culture, and way of doing things, and each is affected by this technology problem in slightly different ways.

Once our research is turned into a design and that design has been piloted, my team knows there will be some pressure to standardize it across all sites. Meanwhile the various sites each have their own idea of what an ideal solution to this technology problem should be.

Many large organizations I’ve worked for have experienced some version of this same scenario, and it has me thinking about how the success of standardizing any solution depends not only on the solution itself, but on how the process of standardization is approached.

There are advantages and disadvantages to both centralized and localized solutions, but they are typically framed as being in opposition. How can companies break out of this stalemate?

I decided to reframe the situation as something other than a “problem”, in a way I hoped would be more illuminating. And I took lessons from several instances where standardization has been achieved with more success.

Understanding a situation as a problem sets up an expectation that you can “solve” it once and for all. If we take our pet solution and try to justify it by overcoming objections, natural human biases will inevitably take over. This is how we end up with the familiar story of leadership ramming an unwanted solution down employees’ throats, or employees avoiding or even sabotaging an unwanted solution rolled out to them by leadership.

It is necessary to deeply understand both sides of the situation. One of the more stimulating things about UX research is getting to hear from different sets of stakeholders, which provides a unique vantage point that I’d liken to stereoscopy. When different sets of stakeholders believe they are in conflict with each other, it can even be like seeing in an extra dimension, almost like cubism.

Calvin and Hobbes goes cubist: Bill Watterson masterfully illustrates what happens when Calvin’s dad convinces him to see both sides of an issue.

In my own words, standardization is when you take something that provides benefits in one context and put that same thing into lots of different contexts, with the aim of reaping not only those same benefits over and over again, but additional benefits that come just from having created uniformity.

But it doesn’t always work out this way. Sometimes the other contexts are a poor fit, and often the end users push back, sometimes rightly so. I’d bet everyone can think of some examples of times when attempts to standardize were justifiably unwelcome.

So, what are some strong arguments for and against both standardized or “centralized” solutions and customized or “local” solutions? Below are some I’ve heard over the years:

A table listing pros and cons of both centralized and localized solutions.
I tried to organize this table so pros and cons on the same line mostly corresponded with each other, but really they are in no particular order.

By the way, there is scholarly research backing up some of the items in this table. For example, Farrell and Saloner published a paper in the 1980s in the RAND Journal of Economics about how one of the risks of standardization is that it can lock you into an inferior product; their paper has since been cited thousands of times, indicating the enduring salience of that insight.

A side benefit of laying out pros and cons like this is that it prompts us to think about the practical and emotional impact of the change, so the change can be communicated more effectively when the time comes. Just as importantly, the exercise decouples our own emotions from whatever solution we ultimately pursue, which makes subsequent decisions less prone to biases and blind spots.

With these arguments laid out, it is clear there is no neat, easy answer that would please everyone and resolve all the misalignment. It is also clear that we are really talking about trade-offs. Whatever the solution, it will need to manage those trade-offs.

Internalizing all this turned it into a filter through which I absorbed other ideas and examples. It didn’t take long before I stumbled across two that were revelatory.

The first example was from US legal codes. Our system of federalism defines certain responsibilities as those of the Federal government, and leaves others to the states. However, states sometimes have an interest in working from the same set of uniform laws as other states, but in areas outside the proper jurisdiction of the Federal government.

When this happens, I learned, what gets created is called a Uniform Act. These are written collaboratively among different states. There are more than a hundred of them, typically concerning matters related to commerce but sometimes other things like child custody, land use, foreign judgments, controlled substances, and so on. 

The way it works is that the various state governors appoint members to a body called the Uniform Law Commission, which drafts whatever act the states will be asked to sign on to. The Commission has no direct legislative power; the acts it drafts only become state law as each state legislature adopts them.

This caught my eye because we are used to hearing about standardization as a top-down thing, but this is essentially a form of bottom-up standardization: using a system of representative government, the people who would be the subjects of standardization get together and decide what it is they’re standardizing to, and then agree to it before it is implemented. And they agree to it because they buy into the idea and think it will work for them. 

How could something like that work for various local sites within large companies? What if there were a technology solution that sites could opt into? It would require some marketing, in the sense of making sure all the sites knew the system was available, what it entailed, what its benefits were, and how to adopt it, but sites that made an informed decision to opt in would presumably reap those benefits without feeling like the solution was foisted on them by ivory-tower executives at HQ.

Remember that even an okay system met with enthusiasm by its users is probably going to work better than a perfect system that users feel resentful about.   

The second example was ISO, or the International Organization for Standardization, which oversees the implementation of tens of thousands of technical standards for everything from shipping containers to the light-sensitivity of photographic film. ISO is the reason the nuts you buy at one hardware store match the bolts you might buy at another hardware store—even in another country. Name just about any manufactured product and there’s a good chance there are one or more of these standards in its specifications. ISO standards are at work behind almost every facet of ordinary life and most people don’t realize it.  

But in the last few decades there has been rising controversy over who sets these standards and what they should be. One instance involves the increasing share of Chinese manufacturing in the global economy, which has led China to push for a bigger seat at the table in developing standards. In an effort to preempt this, national governments worldwide have become more eager to dictate (from the top down) their own standards to whatever manufacturing falls within their jurisdiction.

Advocacy for bottom-up standardization has come from people such as Duff Johnson, Executive Director of the PDF Association, who used that exact term in an article last year in which he said “standards work best for everyone when they are developed openly”, calling for an “organic process”. He recommends that governments engage with industry at the subject matter expert level, creating a win-win in which the government gains expertise, skills, and context, and the industry members can better understand government viewpoints and interests. 

This provides an important perspective on how companies might standardize technology solutions. Executive stakeholders should engage with ground-level employees on the SME level, where each can benefit from exposure to and empathy with the other’s working paradigm. Fortunately, Duff Johnson’s suggestions reflect the way my UX research team is already going about our various projects. 

We have a responsibility to ensure corporate stakeholders understand the SME perspective, and at the same time we have to address the corporate interest in standardization, with its attendant trade-offs but also undeniable benefits. Our interviews and contextual inquiries are exactly those engagements with subject matter experts, and the insights we collect and synthesize will be shared with corporate stakeholders. So in addition to yielding guidelines for the technology designers and developers we work with, this insight about bottom-up standardization reframes our work as forging a bridge between the highest and lowest levels of our company.


Stepping on toes

As a practice, UX can be modeled as having two components: Research and Design. For any UX initiative to be successful, these components have to work together harmoniously. But a user’s experience can also be spoiled when one of these components steps on the toes of the other.

Research and Design are co-iterative as phases; they may overlap, but if UX is being done properly then Research is both the first and last phase — it doesn’t matter how many Design iterations fall in the middle.

Research/Design iterations in UX practice

What I mean by stepping on toes is when a phase that should come later starts too early. Design can step on the toes of Research, for example, by developing prototypes or even finished products that are not informed by Research findings, which is bad on its own but worse if users come into contact with those uninformed artifacts. (Indeed, in UX we often lament instances where this happens, and it happens a lot!) It affects those users’ expectations and perceptions of what the experience should be, and closes doors to what the experience might be.

When Research steps on the toes of Design, you get things like the incident that prompted me to write this blog entry today: I was on a website, entering information into a form. In fact I was typing into a field. In the middle of that task, I was interrupted by a popup asking me for feedback on the website, and I had to take my hands off my keyboard and use my mouse to take further action.

I was interrupted in a task by a popup that looked roughly like this

There are three actions a user might take in this situation:

  1. Give feedback according to how the experience has generally been up to that point
  2. Close the popup, usually without thinking about it
  3. Give feedback that is negatively biased by the recent interruption

I would bet that only a tiny fraction of people would respond with Action 1. Personally, I responded with Action 2, as I suspect most people would. (When briefly interrupted from a task, it’s normal to simply want to resume the task.) I don’t know how many people would go with Action 3, but I would bet it is a much larger proportion than those who would go with Action 1.

The people maintaining the website should have waited until just after users completed the form to ask for feedback. There’s a good chance many people would still take Action 2 at that point, but whatever feedback was collected would not be negatively biased by a recent disruption.
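For what it’s worth, the timing fix is trivial to express in code. Here is a minimal sketch of gating a feedback intercept on task completion; the names (`FeedbackState`, `shouldPromptForFeedback`, the element id) are my own illustration, not from any real survey SDK:

```typescript
// Illustrative sketch: only prompt for feedback once the task is done,
// and never more than once per session.
interface FeedbackState {
  formSubmitted: boolean; // has the user finished the task?
  promptShown: boolean;   // have we already asked this session?
}

function shouldPromptForFeedback(state: FeedbackState): boolean {
  return state.formSubmitted && !state.promptShown;
}

// Hypothetical wiring: hang the prompt off the form's submit event
// rather than a timer, so it can never interrupt an in-progress task.
// document.querySelector('#feedback-form')?.addEventListener('submit', () => {
//   if (shouldPromptForFeedback(sessionState)) showSurveyPopup();
// });
```

The point of the sketch is the trigger, not the logic: a timer fires regardless of what the user is doing, while a submit handler fires only at the natural seam between tasks.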

In this case, Research stepped on the toes of Design by asking users for feedback before they could complete the experience as designed. Not only will those users be unable to comment on the whole experience, but when giving feedback they will be biased against whatever came before. This is why it’s important for the two sides of UX not to step on each other’s toes.

If a user interface requires explanation, is it bad?

Some coworkers shared a meme that’s apparently been circulating on LinkedIn:

A USER INTERFACE IS LIKE A JOKE. IF YOU HAVE TO EXPLAIN IT, IT’S NOT THAT GOOD.

The meme is true for some applications, but definitely not for all. For instance, Microsoft Excel features a highly complex interface, but there’s a good reason it’s one of the most utilized business and research tools in the world, and I doubt the interface could realistically be made much simpler while retaining its capabilities.

The meme definitely doesn’t work in reverse: some interfaces require no explanation but are plenty bad regardless. For example: touchscreens are usually considered very intuitive because the user directly manipulates objects on the screen, but they can be difficult for certain people (e.g. those with vision impairment) to interact with in a physical sense rather than a cognitive one. There is an abundance of accessibility cases such as this.

Or, interfaces can require no explanation but be bad because in their simplicity they encourage an unhealthy level of interaction, or any interaction with a product that is unhealthy to begin with; if not for the simple interface, users might interact less with the unhealthy product or avoid it altogether, to their benefit. Think slot machines, social media, etc.

For many of the products I’ve worked on, a balance must be struck between intuitiveness and accommodating power users — those who rapidly learn and integrate into their regular use the non-obvious shortcuts, hidden features, and advanced capabilities of the product. This balance is found through research: talking with stakeholders, listening to and observing users, and learning about the broader context into which the product fits.

It is fun to identify ways a product’s interface can require less explanation — this very activity is what first got me interested in UX, as I suspect is true for many others in my field — but sometimes a simplification that makes a product easier to use for one person makes it less powerful, or in some cases completely unusable, for another.

The elusive “questioning attitude”: What it is, Why it’s important, and How to cultivate it

An internet search for “questioning attitude” (include the quotes if you’re trying this at home) turns up article after article about this worker characteristic. It’s often stated to be a desirable trait in disciplines like nuclear power generation, construction management, and other industries where errors can be extremely costly, even deadly. You want people in those fields who are not satisfied that “everything looks OK” just because it seems so at first glance. This trait is also crucial in the social sciences, including my occupation, UX research.

When a researcher has a questioning attitude, it doesn’t just mean they like asking lots of questions. Asking questions is fundamental to the job, but a questioning attitude is about being aware of the assumptions one makes and then testing whether those assumptions are true. If they aren’t, they get discarded. While designing those tests and interpreting their results, one still needs to stay constantly on the lookout for unfounded assumptions, in a kind of recursive pattern all the way down, so that the experimental design and the findings it generates rest on a solid bedrock of what has been established as true (as best as can be ascertained given the constraints).

A questioning attitude doesn’t stop being important just because the immediate risks seem low. Any technology built without a questioning attitude can have dramatic negative effects: an annoying Learning Management System can set people back in their careers; a frustrating payment workflow can cost a company millions of dollars; a facial recognition algorithm that’s relatively blind to people of certain races can cause those people to feel alienated or worse. These problems often happen because the designers of the technology work off of assumptions derived from what is familiar to them, and don’t consider that they might not be representative of their users.

If a questioning attitude means being aware of one’s own assumptions, how does one gain that awareness? Based on reading accounts of people who demonstrate a strong questioning attitude, talking to senior colleagues, and drawing from my own experience, I think this awareness can be cultivated from exposure to unusual and uncomfortable situations. Getting to closely know people who are very different from yourself, living in a far-away place that’s very unlike where you’ve spent most of your life, and learning and performing a wide variety of new skills are some of the ways to expose yourself to these situations.

Part of what this provides is the ability to switch into a “man from Mars” mentality, where you can see things afresh, without value judgment or preconceived notions (similar to what I learned from searching through a hot dumpster for a pair of lost keys). Critically, it also hones a rebellious instinct to look where nobody else around you is looking, to draw connections where nobody else is drawing them, etc. Not every place you look and not every connection you draw will be valid, but without this rebellious instinct important considerations are bound to be forgotten.

Humans naturally (by virtue of genetics and formative development) have different levels of self-awareness and rebelliousness, so to some extent the amount of “questioning attitude” present among UX researchers at a given company could have to do with which researchers that company hires. But I believe it is still a skill that can be maximized for each individual, and should be to produce the best design outcomes.

Stop pathologizing change resistance!

Change Management professionals are fond of pointing out humans’ many cognitive biases, which contribute to people’s supposed resistance to various kinds of change. Reference is also often made to the fact that most categories of human emotion are negative, and that change is threatening to people for a long list of emotional reasons related to things like status, or the feeling of insecurity that comes with having to learn to perform tasks in a new way.

It’s easy to come away from these messages with a picture of change resisters as damaged, fragile victims, who respond to change only with irrational defensive emotions, and who need to be “managed”, “dealt with”, “addressed” (and compared to toddlers!), and “overcome”.

In my career I have listened to countless people within various organizations tell me about workplace changes they resisted. In every single case these accounts centered around specific, often tangible negative impacts and interactions the changes were causing: doctors were forced by a new electronic records system to interact primarily with screens instead of patients; accountants had to do double entry in a new piece of software that was confusing and error-prone; engineers found their new ordering tool required them to enter extra, redundant search information while producing results that were unhelpful and irrelevant.

Without talking to people like this and hearing their stories, one could get the impression they were just being pulled along by their familiarity bias, or that they were simply fearful of the loss of status that the newly implemented systems represented. Their condition, one might think, is unfortunate, but ultimately they need to (in the words of one change leader I overheard) “get over it.”

In reality, people seem to usually resist change for good reasons: the new thing is flawed; the new thing is incomplete; the new thing is not communicated about effectively or truthfully; the new thing is not needed; the new thing is not the right solution; the new thing provides a worse interaction experience than the old thing; no training on the new thing was provided, or it was provided at the wrong time, or the training was of low quality; no support for the new thing was offered; etc.

Furthermore, over my years of interviewing people, everyone I’ve asked about workplace change has expressed some variant of this realistic and positive attitude: “Change is inevitable, and I do my best to adapt to it even if I don’t always like it.” Most people I’ve talked to could name both positive and negative workplace technology changes they’d experienced, as well as both technology changes that were forced on them and ones they undertook of their own will.

Pathologizing change resistance is especially damaging because it gives managers and executives the idea that they ought not to question or challenge the latest trends, lest they be found to be suffering the same pathologies as their Luddite employees. This contributes to a kind of Emperor’s New Clothes problem. In the end it’s everyone — not just the “emperor” — who bears the brunt of the bad decision to adopt the change.

The way to avoid this problem is to stop treating change resisters as obstacles, and instead use them as a front-line resource. Some texts give only the merest lip service to seriously engaging change resisters (for example, the 100-page book “The Eight Constants of Change” devotes exactly one paragraph to it) and even then, it is typically done as an afterthought. That is a backward approach.

The people identified as change resisters are really the ones who have the answers to questions like:

  • “What change does our organization actually need to make?”
  • “What are we doing well and should keep doing?”
  • “What makes this organization a place where people want to work?”
  • “What factors go into a successful change?”

These are the kinds of questions that need to be answered before any significant workplace change is considered, which means the so-called change resisters should be engaged right at the beginning, and their considerations taken seriously.

If nothing else, giving employees the impression they are not heard is a way to ensure that a workplace change will fail.

Interpreting data

I don’t know where else to put this, so here goes:

An animation about finding patterns in data (st-pattys-shirk-21)

I came across that image in a link-dump on a blog I sometimes visit; it was borrowed from someone who borrowed it from someone, etc., and I don’t know who to attribute it to, but I love how the animation elegantly explains the problem of interpreting data, agreeing on facts, etc.

As researchers, we need to triangulate our conclusions with other stakeholders to be sure we aren’t focusing on the wrong patterns. I suppose that speaks to the importance of collaboration with team members, asking the right questions, etc.

I am now a certified DACUM facilitator!

Last week I completed a training course at the Ohio State University to become a certified DACUM facilitator. During the week of training, my co-learners often asked me if having observed about a dozen DACUM workshops made the training easier.

The answer is yes. A DACUM workshop provides its observers with a thorough introduction to the DACUM process: each workshop begins with a somewhat in-depth orientation, and the sheer repetitiveness and intensity of the process make it impossible not to come away with a strong impression of how a DACUM is carried out.

A careful observer can also pick up a lot of what the facilitators are doing “under the hood” to make the workshop successful. In this sense, observing DACUMs helps to make the process non-alien, and imparts at least an “academic” understanding of how they are facilitated. It’s a bit like closely watching how someone rides a bike or drives a stick shift: after enough time you can at least figure out how it’s done and start to mentally practice doing it yourself.

But there is no substitute for the experience of getting up there and actually doing the facilitating with a panel. OSU’s DACUM facilitator training program consists of one day of conventional instruction in which learners are seated before an instructor, then two days in which the learners participate as panelists in a mock DACUM (or “Facum”) with one at a time taking turns as facilitator. The final two days are a sort of capstone session, spent conducting an actual DACUM with real panelists provided by government or industry organizations.

In my case, the panel I facilitated for consisted of employees from American Electric Power, where I work. In fact, their DACUM session was part of the very project I’m on, so I had the double benefit of also advancing my team’s project while I gained my DACUM certification.

Normally 3-4 learners share rotating facilitator duties during the capstone session, but for this one the panel was broken out into four mini-DACUMs, each consisting of two panelists and one facilitator. This meant I facilitated a whole workshop by myself.

I can’t think of any better way to train! By the time it ended, I was eager to facilitate another. A coworker observing the workshop asked me how it felt to be DACUM-certified. I responded, “Now everything looks like a nail.” My project is slated to include another five or six DACUM workshops before the end of 2019, and I can’t wait to facilitate them.

For a slightly more in-depth explanation of what DACUM is, I’ve written about it before.

What searching through a hot dumpster taught me about UX research

A good user experience researcher is able to suspend judgment in the moment of data collection. I’ve heard it said that a UX researcher ideally has the ability to see with “new eyes” as if completely ignorant, like a visitor from another planet.

That is a difficult skill to acquire, but a few summers ago I found myself in a situation in which I had basically no choice but to practice it. After telling this story to some coworkers they encouraged me to write it down so it could be easily shared, so here it is:

One evening, arriving home late from a weekend trip with my family and unloading the car, I realized my keys were missing. After much fruitless searching I determined by process of elimination that the only place my keys could be was inside of the dumpster where my wife had tossed some trash soon after we’d arrived. We reasoned the keys must have been in her hand as she threw the trash in, and she must have accidentally let go of them in the same motion.

This was back when we lived in a condo complex, and there was a huge 6-yard dumpster, the kind with sliding doors on the sides, where everyone from the condo complex (and occasionally outside trespassers) deposited all kinds of junk. Since “trash day” was coming up the day after next, the dumpster was already very full.

My options were to either forget about the lost keys and cough up about $200 for a new set (one of the keys was to a car we were leasing, so not cheap) as well as risk a bad person finding the keys and gaining access to my home and both cars, or else to crawl into the dumpster and recover the keys.

First I tried a compromise: several 20-30 minute sessions of peering inside the dumpster with a flashlight, poking at objects with a long stick, trying to look under things, hoping I’d see a glimmer of metal and be able to just fish out my keys without compromising my bodily cleanliness.

No such luck. Trash day was looming and I was running out of options. Eventually I decided that a few minutes of discomfort was not worth $200 and lingering paranoia about being burgled or having my car stolen. So the next day I resolved to travel inside the dumpster and leave no stone unturned, as it were.

I went in the daytime when I’d have the most light. I suited up in waterproof fishing boots, elbow-length kitchen gloves, a headlamp, and some old gym clothes I was prepared to throw away immediately after this excursion. I also tied a handkerchief around my face to filter out the taste of the air.

That’s right: taste. Obviously, I was resolved not to smell anything. That was Rule #1: I forswore inhalation through the nose; only oral respiration permitted. One whiff and you’re done, my inner drill sergeant barked. Knowing I’d still have to taste the air inside that dumpster, I decided I’d at least try to filter it a bit.

As it was summer, the air and metal were both hot as I probed for hand- and footholds and hoisted myself up toward one of the open windows of the giant rusted box. One knee in, then one leg, then both legs. I was sitting on the ledge, facing the abyss. I leaned back, took a few deep breaths outside, then held the last one in and slid forward into the stifling darkness.

I was crouching on various kinds of trash. As I slowly let the air out of my lungs and prepared to suck in more through my bandana (which immediately tasted awful), I glanced around and was almost overwhelmed. Everywhere I looked was something horrendously nasty; things that would be unpleasant enough immediately after being thrown away, but which had by now been sweltering in what was essentially a small oven for almost a week. The refuse was haphazardly piled up around me, ready to avalanche, giving way under my feet.

That’s when I discovered Rule #2, the most important of all: judge nothing. The demands of my circumstances dictated that I internalize this rule immediately and fully, so I did. This amounted to nothing less than a new lens that materialized in front of my eyes, a new filter on existence. A whole new way of seeing things.

A leaking bag of party trash was no longer tepid beer and grease-covered empty cans and napkins that had been dragged across the sweaty faces of drunken pizza eaters. Under my left boot was no longer a torn couch cushion with questionable stains and a bewildering backstory. The three inches of opaque wet stuff sloshing around my right foot in the bottom of the dumpster was no longer a mixture of rainwater and bile and fermenting backwash. The taste in my mouth was not a flavor, it was just a pattern of molecules. The sloshing mixture was just a liquid substance. The cushion and trash were just objects.

These were the new categories of my reality: rigid materials (to be moved so as to be looked under), flexible sheet-like materials (to be drawn away or inverted so as to be looked behind or inside of), liquid materials (to be probed either by boot or by hand), and so on.

Really, there were only two kinds of objects in the universe at that moment: Things That Were My Keys, and Things That Were Not My Keys.

That was my insight. I could judge the world the way I normally would, and fail and suffer, or else suspend all judgment and use my senses only to serve the purpose of disproving my research hypothesis (in this case, “my keys are in this dumpster”). My arms and hands and fingers were now scientific instruments, gathering and testing binary data. My mind had been temporarily optimized for lost-key-finding, and importantly, nothing else.

To make a long story a bit shorter, I did not end up finding my keys in that dumpster. Eventually my Dantean tour of the Inferno was cut short when I accidentally inhaled through my nose and barely clambered out in time to avoid throwing up into the handkerchief tied around my face. Later that day I was hopelessly searching again around my car’s tailgate, and the keys dropped into my open hands. Apparently my wife had placed them on the roof of the car while she was on auto-pilot (not her fault; we had a young child at the time and were both sleep-deprived), and during the day’s driving the keys slid down the roof toward the rear of the car.

The dumpster dive would have been a most unpleasant waste of time and effort if it hadn’t taught me such a valuable lesson. Abstaining from value judgments is essential for good research, and cultivating that ability is crucial for a good researcher. There are probably less disgusting ways to practice that abstention than crawling into a hot dumpster full of random people’s garbage; I recommend exploring them.

DACUM as user research blitz

When I conduct research with users of internal enterprise systems, a significant portion of my interviews is spent learning about users’ roles, duties, and tasks. This information is critical to understanding the context in which users interact with their technology, and what their goals are when using it.

A few months ago I learned about a systematic process dedicated to uncovering and ordering this information. The process is called DACUM, an acronym for Developing a Curriculum. It exists to support training development, since trainers need to know what duties and tasks comprise the various roles within their organizations so they can develop training content for them, and also identify training gaps. I have been working closely with a training development team, and had the privilege of sitting in on a DACUM workshop. I hope to eventually become certified to moderate them myself.

Whereas interviews can take weeks to plan, administer, and analyze, a DACUM workshop takes two days and generates a concise and efficient set of artifacts listing all the duties and tasks for a given role. I have found that observing a DACUM workshop instills a reasonably confident level of understanding about the role discussed. I would otherwise not expect to attain that level of understanding without conducting and analyzing data from a dozen or more interviews.

A DACUM workshop operates somewhat like a focus group, with a panel of subject matter experts (SMEs) and a certified moderator walking them through a semi-structured discussion. The SMEs all share a particular role or job title in common but may (and ideally do) vary in years of experience, work location, and other factors. Through collaborative brainstorming and analysis between the moderator and the SMEs, the key duties of the SMEs’ role are listed and ordered, and then the same method is applied to the tasks that fall under each duty. Other items such as required tools and common acronyms are also listed. These then become the basis of a set of artifacts to which training development personnel can later refer.
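To make the shape of the resulting artifact concrete, it can be pictured as a simple nested structure: duties at the top level, ordered tasks beneath each duty, plus the supporting lists of tools and acronyms. The sketch below is my own illustration of that shape, not an official DACUM chart format, and all the names and example content are hypothetical:

```typescript
// Illustrative (not official) shape for the output of a DACUM workshop.
interface Duty {
  name: string;    // a major responsibility, e.g. "Maintain equipment"
  tasks: string[]; // ordered tasks that fall under this duty
}

interface DacumChart {
  role: string;                     // the job title the SME panel shares
  duties: Duty[];                   // ordered list of major duties
  tools: string[];                  // equipment/software the role requires
  acronyms: Record<string, string>; // jargon the panel surfaced
}

// A tiny hypothetical example chart:
const chart: DacumChart = {
  role: "Line Technician",
  duties: [
    { name: "Maintain equipment", tasks: ["Inspect conveyors", "Log wear parts"] },
  ],
  tools: ["Torque wrench"],
  acronyms: { PM: "Preventive Maintenance" },
};
```

The nesting mirrors the workshop itself: duties are brainstormed and ordered first, then the same method is applied to the tasks under each duty, with tools and acronyms captured alongside.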

Observing a DACUM workshop is beneficial to me as a UX researcher because it affords – in only two days – an in-depth look at a user role, and a strong basis from which to further investigate existing needs not only in learning and training but also in technology and other systems, potentially shaving weeks off my research effort. This means I can deliver findings and recommendations on tighter deadlines, and dedicate time to other research activities.

More information on DACUM can be found at http://www.dacumohiostate.com