2017-08-16

Since we can't challenge diversity policy, how to prevent mistakes?

The James Damore affair at Google has made it very clear that discussion of companies' diversity policies is completely off the table. When I say "discussion" here, I mean "anything other than adulation". I've seen plenty of the latter in the past week. The recent 'letter from Larry Page' in The Economist was a classic example. It was in desperate need of someone tagging it with a number of [citation needed] markers, starting from paragraph 4:

You’re wrong. Your memo was a great example of what’s called “motivated reasoning” — seeking out only the information that supports what you already believe. It was derogatory to women in our industry and elsewhere [CN]. Despite your stated support for diversity and fairness, it demonstrated profound prejudice [CN]. Your chain of reasoning had so many missing links [CN] that it hardly mattered what you based your argument on. We try to hire people who are willing to follow where the facts lead, whatever their preconceptions [CN]. In your case we clearly got it wrong.

Let's accept, for the sake of argument, that random company employees questioning diversity policy is off the table. This is not an obviously unreasonable constraint, given the firestorm from Damore's manifesto. Then here's a question for Silicon Valley diversity (and leadership) types: since we've removed the possibility of employee criticism of your diversity policy, what is your alternative mechanism for de-risking it?

In all other aspects of engineering, we allow - nay, encourage - ideas and implementations to be tested by disinterested parties. As an example, the software engineering design review pits the software design lead against senior engineers from other development and operational teams who have no vested interest in the new software launching, but a very definite interest in the software not being a scaling or operational disaster. They will challenge the design lead with "what if..." and "how have you determined capacity for metric X..." questions, and expect robust answers backed by data. If the design lead's answers fall short, the new software will not progress to implementation without the reviewer concerns being addressed.
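To make the flavour of that exchange concrete, here's a rough sketch - in Python, with entirely invented numbers - of the kind of data-backed capacity answer a reviewer expects, as opposed to hand-waving that the service is "big enough":

```python
# Back-of-envelope capacity check of the kind a design reviewer asks for.
# Every figure here is invented purely for illustration.

PEAK_QPS_FORECAST = 12_000    # assumed peak queries per second at launch
QPS_PER_REPLICA = 850         # assumed measured throughput of one replica
REPLICAS_PROVISIONED = 20     # assumed initial fleet size
TARGET_HEADROOM = 0.30        # reviewers want, say, 30% spare capacity at peak

capacity = REPLICAS_PROVISIONED * QPS_PER_REPLICA
headroom = (capacity - PEAK_QPS_FORECAST) / capacity

print(f"capacity={capacity} qps, headroom={headroom:.0%}")
if headroom < TARGET_HEADROOM:
    print("Review verdict: not enough headroom; add replicas or cut per-query cost")
```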

Testing is often an adversarial relationship: the testing team tries to figure out ways that new software might break, and craft tests to exploit those avenues. When the test reveals shortcomings in the software, the developer is not expected to say "well, that probably won't happen, we shouldn't worry about it" and blow off the test. Instead they either discuss the requirements with the tester and amend the test if appropriate, or fix their code to handle the test condition.
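As a hypothetical illustration (the function and its requirements are invented here), this is what that adversarial exchange looks like in code: the tester targets exactly the inputs the developer would rather not think about, and the failing test forces either a code fix or an explicitly renegotiated requirement.

```python
import pytest

# Hypothetical code under test, invented for illustration.
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the price after applying a percentage discount."""
    return price_cents - (price_cents * percent) // 100

# The tester's contribution: probe the inputs the developer assumed
# "probably won't happen". As written, apply_discount happily accepts
# them, so these tests fail until the code rejects out-of-range values
# (or the requirement is explicitly renegotiated).
@pytest.mark.parametrize("percent", [-10, 101, 10**9])
def test_rejects_out_of_range_discounts(percent):
    with pytest.raises(ValueError):
        apply_discount(1_000, percent)
```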

Netflix's Chaos Monkey subjects a software service to adverse operational conditions. The software designer might assert that the service is "robust", but if Chaos Monkey creates a reasonably foreseeable problem in the environment (e.g. killing 10% of backend tasks) and the service starts to throw errors for 60% of its queries, it's not Chaos Monkey which is viewed as the problem.
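This isn't Netflix's actual tooling, but the idea is simple enough to sketch: inject a foreseeable fault, measure the error rate, and compare it against the robustness the designer claimed. The backend names, kill fraction and error budget below are all assumptions for illustration.

```python
import random

BACKENDS = [f"backend-{i}" for i in range(100)]
KILL_FRACTION = 0.10    # the "reasonably foreseeable" fault: lose 10% of tasks
ERROR_BUDGET = 0.01     # the designer's robustness claim: under 1% errors
QUERIES = 10_000

def serve_query(live_backends: set) -> bool:
    """Toy query path: succeeds only if the randomly chosen backend is alive."""
    return random.choice(BACKENDS) in live_backends

killed = set(random.sample(BACKENDS, int(len(BACKENDS) * KILL_FRACTION)))
live = set(BACKENDS) - killed

errors = sum(not serve_query(live) for _ in range(QUERIES))
error_rate = errors / QUERIES

print(f"error rate under fault: {error_rate:.1%} (budget {ERROR_BUDGET:.0%})")
print("PASS" if error_rate <= ERROR_BUDGET else
      "FAIL: the service, not the fault injector, is the problem")
```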

Even checking in code - an activity as integral to an engineer's day as operating the coffee machine - is adversarial. For any code that hits production, the developer has to get their change past a barrage of pre-existing functional and syntax checks, and then it is still subject to review by a human who is generally the owner of that section of code. That human expects new check-ins to improve the operational and syntactic quality of the codebase, and will challenge a check-in that falls short. If the contributing engineer asserts something like "you don't appreciate the beauty of the data structure" in reply, they're unlikely to get check-in approval.
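The gate in front of the human reviewer can be as simple as this sketch of a pre-submit script (the particular checks listed are assumptions, not any specific company's setup): nothing reaches the code owner until the automated adversaries have been satisfied.

```python
import subprocess
import sys

# Sketch of a pre-submit gate: run the automated checks first, and only a
# change that passes them all is queued for review by the code owner.
CHECKS = [
    ["python", "-m", "pyflakes", "."],   # syntax / obvious-error check
    ["python", "-m", "pytest", "-q"],    # existing functional tests
]

def presubmit() -> bool:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"BLOCKED: '{' '.join(cmd)}' failed; fix before requesting review")
            return False
    print("Automated checks passed; change queued for owner review.")
    return True

if __name__ == "__main__":
    sys.exit(0 if presubmit() else 1)
```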

Given all this, why should diversity plans and implementations - a critical component of a software company - be immune to challenge? If we have decided that engineer-authored manifestos are not an appropriate way to critically analyse a company's diversity system, then what is the appropriate way?

Please note that there's a good reason why the testing and development teams are different, why representatives from completely different teams are mandatory attendees of design reviews, and why the reviewer of new code should in general not be someone who reports to the person checking in the code. The diversity team - or their policy implementors - should not be the sole responders to challenges about the efficacy of their own systems.
