Cognitive Journaling [Ragnarson]

Continuing our recent arc on feedback/self-reflection, this piece by Richard Ragnarson does a great job of introducing, in detail, one highly effective self-reflection practice: 

Cognitive Journaling  

Journaling is making a comeback these days, alongside specific journaling techniques and (obviously) customized journaling products. While some journaling techniques aim to be more forward-looking (aka “planning”), others are more reflective. The good news is that the hype is leading to more innovation in the space and making effective techniques more accessible. The obvious downside is the increased difficulty of separating signal from noise: techniques that are truly effective from ones that are merely popular. 

Ragnarson’s technique rests on a very solid evidence base in the form of Cognitive Behavioral Therapy (or CBT for short). His article is an extensive primer on the technique, walking the reader through the motivation behind it, the model of the mind on which it is based, key constructs, high-level principles, a step-by-step guide to the process, a practice program for developing habit and mastery, ways to measure progress and, last but not least, an FAQ and troubleshooting guide. Incredible work just putting all of this together. 

In this post, I will only cover the high-level principles and the technique itself. If it resonates, reading the whole post by Ragnarson is highly encouraged. 

Principles

  • Falsifiability — describe internal and external facts. A fact is falsifiable when a yes/no question can determine whether it happened or not. Falsifiable: “I only have two hours per day to work on my project.” Not falsifiable: “I have no time to work on my project.”
  • Nonjudgment — describe events, thoughts and feelings, avoiding inferences/deductions regarding their possible causes. Nonjudgment: “I feel demotivated”. Judgment: “Feeling demotivated is bad.”
  • Detail — describe contexts, events, thoughts, emotions, and behaviors with as much detail as possible, while being mindful of not violating the first two principles. 

Putting it all together — journaling while following the principles: 

I went to the supermarket. I met my boss Chris by chance. We spoke and he brought up my work. I thought, “Why can’t he leave me alone even when I am not at work?” I felt annoyed. I thought, “I don’t like feeling like this.” I felt angry. I thought, “I can’t stand getting annoyed anymore,” and then I thought, “I need to change jobs.”

Journaling while not following the principles: 

I was out and met Chris; he’s such a jerk. I can’t stand dealing with him. I need to quit this job.

The ABC Process

ABC refers to a model of cognition based on the view that any life experience is constituted of a series of activating events, beliefs, and consequences (ABCs): Activating event → Beliefs → Consequences (emotions + behaviors).

The journaling process, however, follows a different sequence: 

  1. Start with the C (consequences: emotions and behaviors): write down the emotion or behavior that you want to reflect upon, in the form of “I felt [insert emotion]” or “I did/behaved [insert behavior]”, applying the three principles (falsifiability, nonjudgment, and detail).
  2. Describe the A (activating event): describe the situation you were in when you experienced the consequence from before, in the form of “This [insert event] happened” or “The situation was [insert situation or place]”, applying the three principles.
  3. Find out the Bs (beliefs): With the consequence and activating event at hand, try to remember the thought that you entertained in your reaction. Express it in the form of “I thought that [insert belief]”, applying the three principles. 
  4. Challenge the Bs (beliefs): You challenge a belief by evaluating its validity, doubting it, and finding a better alternative. Consider its flexibility, logic, congruence, and usefulness. 
  5. Write down good alternative Bs (beliefs): Ask yourself: Which alternative thought can I think? Which alternative thought is logical, reality-based, flexible, and useful in pursuing my goals and feeling good? The following table, also created by Ragnarson, illustrates this distinction well: 
[Table contrasting original and alternative beliefs. Source: Ragnarson]
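For those who, like me, enjoy thinking of practices as data structures, here is a minimal sketch of one journal entry following the ABC sequence. This is purely my own illustration (the `ABCEntry` class and its field names are hypothetical, not part of Ragnarson’s method):

```python
from dataclasses import dataclass, field

@dataclass
class ABCEntry:
    """One cognitive-journaling entry, filled in the order described above."""
    consequence: str       # Step 1: "I felt..." / "I did..."
    activating_event: str  # Step 2: "This happened..."
    beliefs: list[str] = field(default_factory=list)       # Step 3: "I thought that..."
    challenges: list[str] = field(default_factory=list)    # Step 4: flexibility, logic, congruence, usefulness
    alternatives: list[str] = field(default_factory=list)  # Step 5: better alternative beliefs

entry = ABCEntry(
    consequence="I felt annoyed.",
    activating_event="I met my boss Chris by chance at the supermarket and he brought up my work.",
)
entry.beliefs.append("I thought that he can't leave me alone even when I am not at work.")
entry.challenges.append("Is this thought flexible, logical, congruent with reality, and useful?")
entry.alternatives.append("Chris bringing up work once does not mean he never leaves me alone.")
```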

In sum

Ragnarson did an incredible job putting together a comprehensive and detailed guide for cognitive journaling, addressing many of the nuanced points needed to start building this powerful self-reflection habit and strengthening our self-reflection muscle. 

Well worth a read!


Fostering responsible action by peers and bystanders [Rowe]

I was hoping to write the post about the feedback ↔ self-reflection polarity this week, but upon attempting to do so, I realized that it needs a bit more percolation time. So instead, I’m picking something slightly less cognitively taxing (for me). 

Still connected to the meta-theme of the previous post around diffusing the monolithic single-hierarchy org structure: I strongly believe that key behaviors typically attributed to leaders (at the top tiers of the monolithic hierarchy) are, in fact, basic acts of “good corporate citizenship,” and we’d be better off seeing these behaviors democratized and spread throughout our professional working community. While specialization/division-of-labor is essential to any large-scale collaboration effort, the purist form in which it is typically practiced has some painful drawbacks (more on that here). 

This rather abstract preamble is just meant to set the context that existed in my head when I encountered Mary Rowe’s work, and specifically: 

Fostering Responsible Action with Respect to Unacceptable Behavior: Systemic Options to Assist Peers and Bystanders 

Because it’s a concrete example of the more abstract point I was making above. Conflict resolution and dealing with violations of the group’s laws, norms and code of conduct is often viewed as the job of HR and managers. And it is. BUT. That does not mean that everybody else in the org, namely peers and bystanders, has no role to play as well. And as we all know too well, whether peers and bystanders will act is in many ways a byproduct of the context or system in which they operate. Under one set of circumstances, they tend to act. Under a different set, they won’t. Rowe set out to identify the attributes of the system that increase the likelihood of peers and bystanders taking responsible action. In her own words: 

Peers and bystanders are important in organizations and communities. Peers and bystanders can help to discourage and deal with unacceptable behavior. They often have information and opportunities that could help to identify, assess and even manage a range of serious concerns. Their actions (and inactions) can “swing” a situation for good (or for ill)… [bystanders] often have multiple, idiosyncratic, and conflicting interests — and many feel very vulnerable. As a result, many potentially responsible bystanders do not take effective action when they perceive unacceptable behavior. Bystanders are often equated with “do-nothings.” However, many bystanders report thinking about responsible action, and say they have actually tried various responsible interventions… Many peers and bystanders might do better if they had a conflict management system that takes their needs into account. A central issue is that peers and bystanders — and their contexts — often differ greatly from each other. As unique individuals, they often need safe, accessible and customized support to take responsible action, in part because of their own conflicting motivations. They often need a trusted, confidential resource. They frequently seek options for action beyond reporting to authorities.

She first decomposed taking action into a four-step process: 

  1. Perceiving behavior that may be unacceptable
  2. Assessing the behavior
  3. Judging whether action is required
  4. Deciding whether and how to make a particular personal response (or responses)

This, in turn, allowed her to distill the key reasons why bystanders do not act or come forward: 

  • The bystander does not “see” the unacceptable behavior
  • The bystander cannot or does not judge the behavior
  • The bystander cannot or does not decide if action should be taken
  • The bystander cannot or does not take personal action

Conversely, bystanders do take responsible action if: 

  • They see or hear of behavior they believe to be dangerous, especially if it seems like an emergency, and especially if they think that they or significant others are in immediate danger
  • They perceive that an apparent perpetrator intends harm, especially if that person is seen to have hurt or humiliated family members or people like themselves
  • They wish to protect a potential perpetrator from serious harm or blame 
  • They are angry, vengeful or desperate enough to ignore the “barriers to action”
  • They are certain about what is happening, and they believe they have enough evidence to be believed by the authorities

With the spectrum of drivers that discourage and encourage actions more clearly mapped out, she was able to identify and prescribe 8 systemic leverage points that are likely to create the context that will encourage bystander action: 

  1. Provide training and discussions sponsored and exemplified by senior leaders
  2. Build on safety and harassment as issues of special importance
  3. Share frequent and varied success stories
  4. Appeal to a variety of socially positive motives
  5. Discuss the potential importance of imperfect “evidence”
  6. Provide accessible, trusted resources for confidential consultation
  7. Provide safe, accessible and credible options for action
  8. Improve the credibility of formal options

If this is a topic that’s particularly relevant in your own organization, the full paper is well worth the read. 


Care Pods [Enspiral]

[Figure omitted. Source: Boyatzis (2006)]

A big hurdle in adopting alternative organizational “operating systems” (roles, responsibilities, etc.) instead of the traditional, single-hierarchy, authority-driven system has been the incompleteness of those alternative systems. 

Some core elements of the collaborative efforts are simply left unaddressed by the alternative systems. Oddly enough, those gaps often have to do with the human aspects of the collaborative effort: compensation, performance management, hiring/firing, professional development, etc. — you know, the “easy” stuff…

So when I stumble upon a practice that seems to be filling some of these gaps, without relying on the traditional structures — it is certainly worth sharing.  

And such is the case with Enspiral’s Care Pods, which offer a very compelling alternative for driving personal and professional development in a way that is not dependent on a manager (as-a-coach) role or cumbersome feedback cycles. 

At their core, Care Pods aim to “operationalize”, or implement, Richard Boyatzis’ Intentional Change Theory (ICT) through a series of 8 sessions (that can then be run iteratively, in perpetuity) carried out by a small group of 4–6 people (aka the Care Pod). A slightly more detailed session plan, with high-level agendas for each session and supporting exercises, is available in the original doc, but here’s the summary of the summary: 

  • Session 1: Overview
  • Session 2: Getting started
  • Session 3: Ideal self
  • Session 4: Real self 
  • Session 5: Developing a learning plan for change
  • Session 6: Implementing the learning plan
  • Session 7: Care pod retrospective
  • Session 8: Iteration

To me, this seems implementation-ready in its current form and already a massive step up compared to the way 99% of organizations are handling personal and professional development today. I’d also offer a few additional tweaks to make it even better, in my opinion at least: 

  • I do think that this approach skews a bit too heavily towards the “self-reflection” pole of the feedback ↔ self-reflection polarity (more on this next week). What this means in practice is that a thoughtful, well-designed peer-feedback exercise that extends beyond the members of the Care Pod and is carried out between sessions 3 and 4 can provide fantastic fodder for the formulation of a more accurate “real self” picture in session 4, leading to a more effective learning plan in session 5.
  • I would also either extend or split session 5 to create the space to introduce Immunity to Change as a core framework for understanding our “default” behavior, and using it to design behavioral experiments that are more likely to yield the change that we seek.
  • Lastly, I’d sprinkle in 1–2 purely “social” sessions, to strengthen connections between the Care Pod members in a more informal setting (drinks, dinner, some other outside-the-office activity). 

Net-net this is a fantastic practice that I’d be eager to implement in either the future org that I’ll join or the organizations that I’ll be consulting with. 


Inclusive hiring: a short primer

As the debate about diversity metrics and quotas rages on, I’d like to share my attempt to find common ground and a path forward. 

To do that, let’s start by defining our “north star” first: 

A fully inclusive hiring effort is an effort in which we engage and attract all relevant candidates for the role, evaluate them fairly for exactly what the role requires (nothing more, nothing less) and give them a clear picture of what working at our company is like, so they can evaluate the opportunity fairly.

Note that it doesn’t include any references to diversity, identity, minority, etc. 

Now we can ask: what gets in the way of this ideal end-state? And the answer: our own “humanity”. Our susceptibility to certain biases in our thinking and actions eventually manifests itself as selection bias: either we end up selecting/rejecting candidates, or candidates end up selecting/rejecting us, based on attributes, knowledge or actions that have no impact on their ability to do well in the role that we’re hiring for. 

Selection bias tends to creep in across 4 different dimensions of the hiring process. While they may have some overlap between them and are not fully mutually exclusive, discerning between them helps move us forward: 

  1. The way we attract/reach out to candidates
  2. The way we define what success in the role requires (and doesn’t require)
  3. The way we conduct the assessment of the candidate’s performance
  4. The way we evaluate the candidate’s performance in the assessment

With these dimensions in mind, we can now consider specific hiring practices and articulate their impact on helping us create a more inclusive hiring process. While these practices have a compounding impact when used together, they have few dependencies between them and can certainly be used in a more piecemeal or à-la-carte way.  

The list below is not comprehensive and I’m continuously adding to it, but I believe it to be a good start: 


If we know in which direction we want to go, does it matter where we are?

And, for that matter, does it matter where exactly we’re trying to get to? 

One use of numerical measurement: to describe direction

Continuing to reflect on some of the topics that I’ve covered in the last couple of posts, I want to spend more time today on numerical measurement and its alternatives. 

While the answer to the question posed in the title may seem like a resounding “yes”, I’d like to suggest that in some circumstances, it may actually be “no”. 

To be clear, I’m not a measurement or metrics hater. And this is not a rant about metrics. But given that I’ve spent quite a bit of time calling out their deficiencies, I owe it to myself and others to offer alternatives that are not meant to completely replace them, but to expand the toolbox so we can apply the most effective tool to the problem at hand. 

I won’t repeat my whole case on the challenges with numerical measurement but will just briefly mention that they typically elicit some non-trivial “operational” challenges in both the collection and interpretation of the data. And perhaps more importantly, they also pose some more “strategic” challenges — they reduce a highly complex reality into a very simplified representation. Sometimes that’s incredibly helpful — separating signal from noise and creating clarity on what’s truly important. But oftentimes oversimplification leads to solutions with both cognitive and behavioral flaws when applied back in the complex reality. 

The alternatives to this approach depend on the purpose we were trying to accomplish with numerical measurement to begin with, something that I’ve noticed I haven’t given enough attention to in the past. Introducing some distinctions there helps to identify viable alternatives, at least in the two cases outlined below. 

Measurement to articulate direction

One common use case of numerical measurement is to articulate direction. By describing where we are right now and where we want to get to, we implicitly define the direction in which we want to go: 

  • We want to get from point A to point B = we need to drive in direction C
  • We want to improve margins from 13% to 15% = we need to improve margins/become more efficient
  • *nerd alert* describing a vector using the coordinates of its start and end points

Oftentimes the start and end points are rather meaningless in and of themselves. It’s the direction or delta between them that matters. 

Yet describing the start and end points is not required to describe the direction. I can still “drive south” without saying “get from SF to LA”. Not the perfect example, I know, but hopefully it still gets the point across. 
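For fellow nerds, the same point in vector notation (my own formulation, not from the original post): the direction is fully determined by the delta between the endpoints,

\[
\hat{d} = \frac{B - A}{\lVert B - A \rVert},
\]

and any pair of points whose delta is a positive multiple of \(B - A\) yields the same unit direction \(\hat{d}\). The endpoints carry strictly more information than the direction requires, which is exactly why they can be dropped when only the direction matters.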

So alternatively, we can use either “even over” statements or slightly more detailed polarity maps to describe the direction we want to go. 

Measurement to choose between options

Another common use case of numerical measurement is to choose between options. 

For example, choosing what driver to focus on next in order to improve employee engagement. This is often done by having employees rate each one of the drivers using a Likert Scale, translating each rung in the scale to a numerical score and sorting the drivers from the lowest rated to the highest rated or from the one that worsened the most to the one that improved the most. 

The distinction between cardinal and ordinal utility can help us find an alternative. We don’t even have to go into the debate about the feasibility of truly measuring the cardinal utility of each one of the drivers; we can simply say that since we’re only using the measurement to choose between the options, ordinal utility is sufficient. And in that case, there’s a simpler alternative to numerically rating each one of the drivers in isolation: asking employees to stack-rank the drivers from the one we should focus on the most to the one we should focus on the least. 
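As a minimal sketch of what the aggregation step could look like (the driver names are made up, and average-rank aggregation, essentially a Borda count, is just one reasonable choice):

```python
from statistics import mean

# Each employee stack-ranks the drivers from "focus on most" (first)
# to "focus on least" (last). No numerical ratings involved.
rankings = [
    ["recognition", "growth", "autonomy", "compensation"],
    ["growth", "autonomy", "recognition", "compensation"],
    ["growth", "recognition", "compensation", "autonomy"],
]

drivers = rankings[0]
# Average position across all rankings; lower = higher collective priority.
avg_rank = {d: mean(r.index(d) for r in rankings) for d in drivers}

for driver, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{driver}: average position {rank:.2f}")
# -> growth first, then recognition, autonomy, compensation
```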


Employees as customers: from metaphor to analogy

It’s been interesting to notice the difference between the concepts that I thought would be useful to keep in mind immediately after reading a certain article, and the ones that actually proved to be useful several months later, when I find myself referencing them over and over again.

Of the three insights from Ikujiro Nonaka’s seminal paper The Knowledge-Creating Company, the one I find myself using again and again is his distinction between metaphor and analogy:

One kind of figurative language that is especially important is metaphor. It is a way for individuals grounded in different contexts and with different experiences to understand something intuitively through the use of imagination and symbols without the need for analysis or generalization. Through metaphors, people put together what they know in new ways and begin to express what they know but cannot yet say. As such, metaphor is highly effective in fostering direct commitment to the creative process in the early stages of knowledge creation…

But while metaphor triggers the knowledge-creation process, it alone is not enough to complete it. The next step is analogy. Whereas metaphor is mostly driven by intuition and links images that at first glance seem remote from each other, analogy is a more structured process of reconciling contradictions and making distinctions. Put another way, by clarifying how the two ideas in one phrase actually are alike and not alike, the contradictions incorporated into metaphors are harmonized by analogy. In this respect, analogy is an intermediate step between pure imagination and logical thinking.

The context in which it is most present for me right now is thinking about employees as customers, which I’d argue for many organizations is still “stuck” in the metaphor stage of knowledge creation. But before I jump to the opportunity that lies ahead of us, I want to acknowledge the celebration-worthy progress that the current stage represents.

Thinking about employees as customers is a massive step forward compared to the previous organizing metaphor: employees as resources/machines. First and foremost, it acknowledges that employees are human beings and need to be treated as such. It reminds us that employees have a choice about their actions: they choose to join, they choose to stay, and they can choose to leave. It has also served as a directional inspiration for how to address many employee challenges by borrowing concepts and ideas from the customer domain:

  • Lead generation/business development → Sourcing
  • Sales → Recruiting
  • Sales Marketing → Recruiting Marketing
  • Product brand → Employer brand
  • Product value proposition → Employee value proposition
  • Net Promoter Score (NPS) → employee Net Promoter Score (eNPS)
  • Customer journey/lifecycle → Employee journey/lifecycle
  • Customer onboarding → Employee onboarding
  • etc.

The metaphor continues to provide inspiration to this day, with more customer concepts making their way to the employee domain. A more recent example is recognizing specific “moments that matter” in the customer’s lifecycle, which require special design and attention, as also relevant for employees.

However, while the metaphor continues to move us forward in some ways, its drag, or downside, if you will, is also starting to become more apparent: cultural challenges such as unjustified entitlement or learned helplessness among employees, which in turn make efforts to improve the shared working experience somewhere between extremely hard and impossible to execute on.

A more concrete example is the heavy reliance on surveys as the primary means of engaging with employees, a tool carried over directly from the customer domain to the employee domain. Employee interaction needs to be bi-directional and iterative, and it needs to revolve not just around the present state but also around creative problem solving: what each of us can do about it. Yet surveys tend to move the conversation in exactly the opposite direction.

Nonaka’s work paints a clear path forward: moving away from metaphor and towards analogy. While the key focus of the former is on looking for similarities as sources of inspiration, the key focus of the latter is on looking for differences (distinctions) and addressing them, creating a more refined representation of reality.

At the root of most of the customer concepts that get pulled into the employee domain and end up backfiring seem to be a handful of distinctions, ways in which customers and employees are NOT alike. I suspect I’ll continue to refine these over time but here’s what I have so far:

  • The core interaction between customer and service provider has a clear division of roles: I, the customer, have a problem that I’m trying to solve, and you, the service provider, are supposed to provide me with a solution to it. Inside the organization, it’s not that clear cut: we are working to accomplish a shared mission together, and the division of roles and authority is more dynamic and less absolute. We are all part of the problem and we are all part of the solution.
  • Cross-customer interaction, as it pertains to the organization, is relatively weak (mostly word-of-mouth reputation), so thinking about the way the organization interacts with each customer in isolation is a pretty accurate description of reality. Cross-employee interaction, as it pertains to the organization, is very strong — tight collaboration to accomplish shared goals. So the way the organization interacts with each employee cannot be thought of in isolation.

Acknowledging these differences and designing ways of working together with them in mind is an important frontier in the future of work.


Real-time, Continuous DIB (Diversity, Inclusion, and Belonging)

Photo credit: https://kumu.io/

Author’s note: it’s been a while since I had a chance to write a post completely “from scratch”, rather than basing it on a particular article or book. This one has been brewing in my head for some time now and I’m excited to share it with you all!

“You can’t manage what you can’t measure.” — Peter Drucker

“If you give a manager a numerical target, he’ll make it even if he has to destroy the company in the process.” — W. Edwards Deming

“Not everything that counts can be counted, and not everything that can be counted counts.” — William Bruce Cameron

This trio of quotes beautifully captures the fundamental tension that we’re all trying to navigate when we work hard to make our organizations better. Without a clear measure of progress, it’s hard to know whether our current efforts advance us towards our goal or move us away from it (Drucker). However, not everything that we care about can be measured (Cameron), and sometimes trying to force the issue and measure it anyway can lead to pretty painful, unintended consequences (Deming).

The current state of DIB efforts

Nowhere is this tension felt more today than in our collective efforts to make our organizations more diverse and our behaviors more inclusive, fostering a deep sense of belonging among our teammates. Figuring out what to measure and what progress looks like remains a heavily debated topic.

Measuring diversity is becoming a more popular practice because it seems easy at first. But when we dig a little deeper and grapple with harder-to-measure aspects, such as socioeconomic status (see Aline Lerner’s response here), not to mention intersectionality, Deming’s observation seems closer to the truth.
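To illustrate the “seems easy at first” part, a single-attribute diversity index really is only a few lines of code. Here’s a hypothetical sketch using Blau’s index (one common heterogeneity measure); the socioeconomic and intersectional aspects discussed above are exactly what a single categorical attribute like this misses:

```python
from collections import Counter

def blau_index(attributes):
    """Blau's heterogeneity index: 1 - sum(p_i^2) over category shares p_i.
    0.0 means everyone is identical on this attribute; higher means more diverse."""
    counts = Counter(attributes)
    n = len(attributes)
    return 1 - sum((c / n) ** 2 for c in counts.values())

# One categorical attribute per teammate (placeholder categories).
team = ["A", "A", "A", "B", "C"]
print(f"{blau_index(team):.2f}")  # 0.56
```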

Measuring inclusion is perhaps more critical, since it seems to have a more profound business impact. Not to mention that improving diversity without deliberate follow-up action will most likely decrease inclusion. However, inclusion turns out to be more difficult to measure and improve.

Often stumped by this challenge, many HR organizations turn to their “silver bullet” measurement tool and attempt to use their all-purpose hammer: the survey. Yet, as the folks at Cultivate so eloquently point out, survey data suffers from a myriad of human biases: from recency bias, through acquiescence bias, to self-reporting and social desirability bias. And I will further add some more “mechanical” challenges, such as selection bias (partial participation) and proper statistical analysis of the results.

Supporting inclusion also requires a different “type” of measurement. Since improving inclusion requires human behavior change, feedback (measurement) needs to be a lot more frequent and timely in order to make a difference. Learning today that there was something that I could have done differently two months ago is not so useful. Learning about it immediately, or even an hour later can be transformational, since the window for corrective action is still open.

To find a solution to this conundrum, we need to take a slight detour and familiarize ourselves with a much lesser known tool in our toolbox, that’s currently undergoing a profound revolution.

Organizational Network Analysis

Organizational Network Analysis (ONA for short) is the process of studying the relational and communication patterns within an organization by building models (graphs) of those relationships/interactions and conducting analysis, often statistical in nature, to derive various insights at both the group and individual levels. For example, these models/graphs, often referred to as “sociograms”, can be used to evaluate the overall “closeness”/”density” of relationships inside the organization by measuring the average “distance” (number of connections) that it takes to get from any one place in the network to any other place in the network. At the individual level, it is fairly easy to identify “outliers” — the people who are least connected to everyone else in the organization. A slightly more comprehensive overview of ONA can be found here.
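As a toy illustration of both levels of analysis (a hypothetical sketch using the open-source networkx library; the people and interactions are made up):

```python
import networkx as nx

# A tiny sociogram: nodes are people, edges are observed interactions.
G = nx.Graph()
G.add_edges_from([
    ("ana", "ben"), ("ana", "carol"), ("ben", "carol"),
    ("carol", "dan"), ("dan", "eve"), ("eve", "frank"),
])

# Group level: average "distance" (number of connections) between any
# two people. Lower means a denser, more closely connected organization.
print(nx.average_shortest_path_length(G))  # ~2.07 for this toy graph

# Individual level: the least-connected people ("outliers").
print(sorted(G.degree, key=lambda kv: kv[1])[:2])  # [('frank', 1), ('ana', 2)]
```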

While the roots of ONA can be traced all the way back to the work of Emile Durkheim in the early 1890s, real research began in earnest in the 1930s and made significant leaps forward in the 1970s and 1990s as more sophisticated technology unlocked more complex analysis of the data. Today, ONA is offered as a standard service by both top-4 consulting shops like Deloitte and boutique consulting firms specializing purely in ONA like Culture Optix and Tree Intelligence. 

But ONA never achieved wide, mainstream adoption. Most HR organizations today don’t even know that the tool exists, let alone use it in their day-to-day practice. I believe this is due to two main reasons:

  1. The cost of ONA in terms of time, energy and effort remained high. Even though technology helped in the analysis portion, the data collection process required for the construction and update of the sociograms remained mostly analog, relying heavily on survey data with all their drawbacks covered above, significantly constraining both the type of data that can be collected and the frequency with which it can be collected.
  2. The benefits of ONA remained fuzzy. Partly due to the data collection constraints, partly due to the relevant research still being in its adolescence, and partly due to not-so-great product management, the value proposition of using ONA and the types of organizational challenges that it can help address remained too broad and too shallow, never scratching a big enough itch to justify the complex execution and analysis.

But all of this is now changing.

The digital revolution

In the last two decades organizations have been undergoing a digital revolution in the way they collaborate and work together: from the pervasive use of email, through video conferencing, to instant messaging. Furthermore, many analog activities still generate some digital “footprint” — from calendar invites to digital work artifacts like documents, spreadsheets, and code.

This revolution opened the floodgates of data towards a new era of ONA in which not only can sociograms be constructed and maintained almost effortlessly, in real-time and with no human bias, but also the richness and granularity of the data that can be analyzed exceed by orders of magnitude what was possible a decade ago.

Companies like CrossLead, Kumu/Compass, and Cultivate are the early pioneers that have started exploring this rich sea of opportunities.

And companies like Humanyze continue to push the envelope even further by creating solutions that deliberately generate digital footprints for the analog interactions that currently don’t have organic ones.

DIB + ONA = 

By now you should probably be able to tell where I’m going with this:

I believe improving inclusion is the “killer app”, the “thin edge of the wedge” if you will, for the broad adoption of next-generation ONA. 

ONA, with its analytical orientation towards identifying individual and group relational patterns, and the ability to run it seamlessly on an ongoing basis, is perfectly positioned to close the currently-broken feedback loop and provide us with the close-to-real-time feedback needed to drive real behavior change.

ONA can help us identify the overall state and trend, as well as both the “bright spots” (to learn from) and the “hot zones” (to help), across many dimensions of inclusion, including but not limited to the following (a toy sketch of one such metric follows the list):

  • Use of gendered language
  • Communication silos and the people who connect them
  • Outsiders and bridges
  • Balance of communication frequency across teammates
  • Balance of communication time/reciprocity
  • Communication inside/outside working hours
  • Communication sentiment (positive, negative, etc.)
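Here’s a toy sketch of just one of these, the balance/reciprocity of communication between pairs of teammates, computed from message metadata (the log format and the names are my own assumptions):

```python
from collections import Counter
from itertools import combinations

# Hypothetical message log: (sender, recipient) pairs from chat/email metadata.
messages = [
    ("ana", "ben"), ("ben", "ana"), ("ana", "ben"), ("ana", "ben"),
    ("carol", "dan"), ("carol", "dan"), ("carol", "dan"), ("carol", "dan"),
]

counts = Counter(messages)
people = {p for pair in messages for p in pair}

# Reciprocity per pair: messages in the rarer direction divided by messages
# in the more frequent one. 1.0 = perfectly balanced, 0.0 = entirely one-way.
for a, b in combinations(sorted(people), 2):
    ab, ba = counts[(a, b)], counts[(b, a)]
    if ab + ba == 0:
        continue  # this pair never communicates
    print(f"{a} <-> {b}: {ab + ba} messages, reciprocity {min(ab, ba) / max(ab, ba):.2f}")
# ana <-> ben: 4 messages, reciprocity 0.33
# carol <-> dan: 4 messages, reciprocity 0.00 (one-way; a possible "hot zone")
```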

Marrying DIB and ONA presents an opportunity to leverage the heightened awareness around this hot-button, critical topic and gain an edge in a red-hot, super-competitive market, for HR leaders and software vendors alike.
