Affirmative feedback

When we give someone constructive feedback, we typically call attention to the negative impact that their behavior had on us, and invite them to change their behavior (and if we’re really doing a good job, we also commit to helping them do so). And that makes a lot of sense. Our brain makes sense of the world around us through the differences between what we expect will happen and what actually happens. So when someone behaves in a way that’s different from the way we expected them to behave, it’s easy for us to notice that and call attention to it.

Explained through the Johari Window, we assume that the negative impact of their behavior falls in their “blind area” — known to us (others) but unknown to them. By disclosing it, we bring that insight into the open and enable growth and development.

Johari Window

But there’s another, hidden opportunity that we tend to miss. We incorrectly assume that any positive implications of their behavior are fully known to them and that these aspects of their behavior are fully conscious, deliberate and intended.

That is often not the case.

Just like behaviors with negative impact, behaviors with positive impact regularly fall in one’s blind area, and are unconscious or unintended. Therefore, there is a significant developmental benefit in bringing them into the open through affirmative feedback —  feedback that’s meant to reinforce a particular behavior pattern rather than encourage the changing of one.

Compounded by the fact that we usually find it easier to do more of something we’re already doing than to stop doing something or to start doing something totally new, the impact of affirmative feedback can be much higher than that of constructive feedback.

Note that there’s an important distinction between affirmative feedback and praise: the former is still intended to serve a developmental purpose (just like constructive feedback), while the latter is intended more to demonstrate situational empathy, gratitude, and recognition of the actions taken.


Learning from failure #3 [Mgmt 3.0]

The relationship between learning and failure is a topic that I’ve covered a few times in this publication:

This week’s post is this simple diagram from the folks at Mgmt 3.0:

Celebration Grids

While calling it a “celebration grid” is a bit over-the-top to my mind, this nifty diagram packs a lot of insight into a simple visual.

I like the behavioral progression from mistakes through experiments to practices, with an increased likelihood of the outcome being a success, and with failure being treated differently in each behavioral phase. All the while, learning follows an inverted-U shape: we learn most through experiments, regardless of whether the outcome is a success or a failure.


Surveys: exploring statistical significance

WARNING: Some stats and math ahead. Mostly based on this lovely post: T-test explained: what they mean for survey analysis

Who doesn’t like surveys?

Well, most people. And yet, we love using them in organizational contexts for various purposes.

One big challenge in using them in that context is that they are a one-sided exchange of information. And while that makes sense in many other contexts (for example, when asking customers for feedback about a product), inside the organization what we’re really trying to create is a dialogue, since survey “takers” have a big part to play in addressing any insights that come out of the survey. But that’s a topic for a different post. Today, I want to focus on something a lot more concrete.

We like using surveys because they can provide us with a quantitative assessment of a situation. For example, to measure “how are we doing?” in a particular area and to track it over time or across different organizational demographics. But sometimes, if we’re not analyzing the data carefully enough, over-reliance on surveys can lead us to over-react.

Let’s say that we ran an inclusion survey in which participants were asked to respond to the following statement using our beloved 5-point Likert scale: “When I speak up, my opinion is valued”. When analyzing the survey results, we discovered that women scored a 4.5 on average, while men scored a 4.3 on average. Can we conclude, based on the survey data, that men and women in our organization are not given an equal voice?

The answer, as always, is: “it depends”. Depends on what? Glad you asked! It depends on the following things:

  1. The size of our organization and the participation rate in our survey
  2. The confidence level we want to have in our answer. The standard 95% confidence level means that if we ran the survey again, we’d reach the same conclusion 95% of the time.
  3. The difference in the means between the two groups
  4. The standard deviation of the responses in each of the groups

1–3 are fairly straightforward. The standard deviation is the least intuitive of the bunch, so we’ll focus on it and say that: assuming an organization of a certain size, in order for a certain difference in means to be statistically significant at a certain confidence level, the standard deviation of the results in each group must fall below a certain maximal threshold.
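To make this concrete, here’s a minimal sketch of the underlying two-sample t statistic, applied to the 4.5-vs-4.3 example from above. The group size of 50 per group and the standard deviation values are hypothetical, and the 1.96 critical value is the large-sample normal approximation of the 95% two-tailed t critical value:

```python
import math

def two_sample_t(mean1, mean2, sd, n_per_group):
    """Pooled two-sample t statistic for two equal-size groups
    that are assumed to share the same standard deviation."""
    se = sd * math.sqrt(2 / n_per_group)  # standard error of the difference
    return (mean1 - mean2) / se

# The survey example: women average 4.5, men average 4.3.
# Group sizes and standard deviations here are made up for illustration.
T_CRIT = 1.96  # ~95% two-tailed critical value (normal approximation)

for sd in (0.4, 1.0):
    t = two_sample_t(4.5, 4.3, sd, n_per_group=50)
    verdict = "significant" if abs(t) > T_CRIT else "not significant"
    print(f"sd={sd}: t={t:.2f} -> {verdict}")
```

Note how the very same 0.2 difference in means flips from significant (t = 2.5 at sd = 0.4) to not significant (t = 1.0 at sd = 1.0) purely as a function of the spread within each group.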

More so: the smaller the org (or the lower the participation rate), the smaller the difference in means, and the higher the required confidence level — the lower the standard deviation threshold will be.

Let’s make this a bit more concrete: assuming a best case scenario in which there’s full participation in the survey and the groups are of equal size — these would be the standard deviation thresholds for various combinations of org size (n), confidence levels, and difference in means:

Standard deviation thresholds for statistical significance of difference in means at varying confidence levels and org sizes

So in a 100-person organization, in order for a 0.1 difference in means to be statistically significant at a 95% confidence level, the standard deviation of both groups must be below 0.25. Keep in mind that this is the best-case scenario, so if participation was lower or the groups were not equal in size, that threshold would be even lower.
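The threshold itself falls out of rearranging the t statistic: significance requires t = Δm / (s·√(2/n)) to exceed the critical value, so s must stay below Δm·√(n/2) / t_crit. A quick sketch, again assuming full participation and equal groups of 50 and using the normal-approximation critical value:

```python
import math

def sd_threshold(diff_in_means, n_per_group, t_crit=1.96):
    """Largest shared standard deviation at which `diff_in_means` is
    still significant for two equal-size groups (pooled t statistic,
    normal-approximation critical value)."""
    return diff_in_means * math.sqrt(n_per_group / 2) / t_crit

# 100-person org, full participation, equal groups of 50:
print(round(sd_threshold(0.1, 50), 2))  # ~0.26
```

With the exact t critical value for 98 degrees of freedom (~1.98) rather than the 1.96 approximation, this lands at ~0.25 — the figure quoted above.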

Which leads us to the next question: what does a 0.25 standard deviation look like? Sure, we can do the math and crunch the numbers, but for those of us (yours truly included) who don’t have a strong statistical intuition, this may help:

Distribution of n=100 results on a 1–5 scale with standard deviations of 0.3, 0.6, 0.9, 1.2
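Another way to build that intuition is to compute the standard deviation of a concrete, hypothetical response pattern. For instance, a 100-person group where 90% answer “4” and 10% answer “5” is already fairly tight — and it sits right around a 0.3 standard deviation:

```python
import statistics

# Hypothetical Likert responses: 90 people answer 4, 10 answer 5.
responses = [4] * 90 + [5] * 10

print(statistics.mean(responses))    # mean of the group, ~4.1
print(statistics.pstdev(responses))  # population standard deviation, ~0.3
```

In other words, clearing a 0.25 threshold requires responses even more tightly clustered than this — a useful sanity check before reading too much into a small gap in means.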

The next time I’m running a survey, before jumping to action simply by looking at the means, I plan to look up my standard deviations in the table above and figure out whether action is truly needed. I’d encourage you to do the same 🙂


Flexible Work [Werk]

When we talk about giving employees more flexibility around doing their work, we often have different dimensions of flexibility in mind.

The team at Werk did a great job in offering a shared taxonomy for talking about flexibility:

Flexibility 101

They make a distinction between six types of flexibility:

  1. Work from home (Werk: DeskPlus™) employees are based out of a company office, but can work at a location of their choosing for some portion of their time. Utilizing location variety can enhance productivity, reduce the burden of a long commute, increase creativity, and/or meet other needs.
  2. Part Time employees work on a reduced hours schedule. Part Time does not mean an individual is no longer in an advancement track role — employees utilizing Part Time have the experience and skills to meet their objectives on a reduced hours schedule.
  3. Step away (Werk: MicroAgility™) employees have the autonomy to step away from their work to accommodate the unexpected in micro increments of 1–3 hours. Employees are responsible for communicating their plans and meeting their daily objectives. The ability to make micro-adjustments to the workday prevents an employee’s personal life from becoming a major work-life disruption.
  4. Flexible workday (Werk: TimeShift™) employees reorder their working hours to create an unconventional schedule that optimizes productivity and performance. An employee could shift their workday an hour to avoid a long commute, to break their day into sprints, or in a formalized condensed work week program.
  5. Minimal Travel (Werk: TravelLite™) employees have minimal to no travel, with a maximum of 10% travel annually (2–4 days per month or its annual equivalent). Employees can reduce travel requirements by utilizing virtual meetings.
  6. Remote employees do not work at a company office — they can work from anywhere. While many Remote arrangements are fully location independent, some may have location considerations, such as the need to attend occasional in-person meetings or service a region.

I find these distinctions very valuable in giving us shared language to discuss this broad and amorphous topic.

My biggest qualm with the Werk framework is their decision to brand (and trademark…) the types of flexibility that did not already have a broadly accepted definition. Looking at the field of psychology as an interesting case study, the need to “brand” different psychological techniques resulted in a proliferation of brands with highly similar, or even completely identical, underlying principles. That, in turn, made it harder to see the forest for the trees and slowed down progress.
