April 13th, 2017
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: email@example.com
I must warn you that parts of this post are disgusting, disturbing and awful. If you are having a rough day, feel free to save it for another time. If you are already sick of seeing hateful language, this is likely not a post to read right now. That said, I feel it is my duty as a former journalist to look at this material, expose it, and hope to spark better conversations about how we handle both implicit and explicit bias and prejudice in our models.
To dive a bit deeper into how these biases play out, let's try some standard analogies. We all know the king − man + woman = queen comparison; how might that pattern apply to other professions?

In: model.most_similar(positive=['doctor', 'woman'], negative=['man'])
Out: [('gynecologist', 0.7093892097473145), ('nurse', 0.647728681564331), ...]

So, Doctor − Man + Woman = Gynecologist or Nurse. Great! What else?
So, Computer Programmer − Man + Woman = housewife. Or graphic designer. Because of course women only do design work (never mind the great male designers or the amazing female DBAs). Note that these results come with varying degrees of similarity (the second element of each tuple); the higher the number, the closer the vectors. That said, these are real responses from word2vec.
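To make the mechanics concrete, here is a minimal numpy sketch of roughly what gensim's most_similar does under the hood: normalize the input vectors, add the positives, subtract the negatives, and rank the rest of the vocabulary by cosine similarity to the result. The four toy 3-dimensional vectors below are invented stand-ins for a real pretrained embedding (such as the GoogleNews word2vec model) — the actual model has hundreds of dimensions and millions of words.

```python
import numpy as np

# Toy vectors standing in for a real pretrained embedding;
# these values are illustrative only, not real word2vec weights.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def most_similar(positive, negative):
    """Rough sketch of gensim's most_similar: combine the unit-normalized
    positive vectors minus the negative ones, then rank the remaining
    vocabulary by cosine similarity to that query vector."""
    query = unit(sum(unit(vocab[w]) for w in positive)
                 - sum(unit(vocab[w]) for w in negative))
    exclude = set(positive) | set(negative)
    scores = [(w, float(unit(v) @ query))
              for w, v in vocab.items() if w not in exclude]
    return sorted(scores, key=lambda t: t[1], reverse=True)

# king - man + woman ≈ queen, returned as (word, similarity) tuples,
# just like the Out: lines above.
print(most_similar(positive=["king", "woman"], negative=["man"]))
```

The second element of each returned tuple is the cosine similarity, which is why a score closer to 1.0 means the vectors sit closer together in the embedding space.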
There were many more offensive phrases I found, many of which I didn't save or write down, as I could only stomach about five minutes of this research at a time before needing a mental and spiritual break. Here is a summary of the ones I remembered and was able to find again:
mexicans => illegals, beaners
asians => gooks, wimps
jews => kikes
asian + woman => teenage girl, sucking dick
gay + man => “horribly, horribly deranged”
transsexual + man => convicted rapist