Cyberfeminism : A Brief Discussion of Kriti Sharma's and Robin Hauser's Foretelling of Artificial Intelligence and Its Interaction with Humans

Introduction : This blog has been written as part of the Thinking Activity assigned by my professor, Dr. Dilip Barad, on the Post-Humanist concept of 'Cyberfeminism.'


Further, we will refer to the TED Talks of Kriti Sharma, on keeping human bias out of Artificial Intelligence, and Robin Hauser, on whether we can protect Artificial Intelligence from our biases.

What does the Term 'Cyberfeminism' Signify?

There are variations in accounts of the coinage of the term 'Cyberfeminism': the Encyclopedia of New Media claims that the term was coined by Sadie Plant in 1994, whereas the London School of Economics and Political Science, in its blog, claims that it was coined by VNS Matrix, who were inspired by Donna Haraway's 'A Cyborg Manifesto.'

Apart from this, the concept of 'Cyberfeminism' is relevant to the real as well as the virtual world. Whatever gender binaries, biases, patriarchal structures, and inherited psychological beliefs about women are rooted in society now seep into man-made algorithms, and women still have to struggle to claim their place in the cyberworld, Artificial Intelligence, and Robotics, as these are the newest and most advanced spheres of human society across the world.

Kriti Sharma : How to Keep Human Bias out of AI :


My Take on the Talk by Kriti Sharma :

• "Robots getting citizenship of an actual country."

• "How many decisions are made for you today by AI? And how many of these were based on your gender, your race, or your background?"

• "Algorithms are being used all the time to make decisions about who we are and what we want."

• "AI is being used to help decide whether or not you get that job interview, how much you pay for your car insurance, how good your credit score is, and even what rating you get in your annual performance review; but these decisions are all being filtered through assumptions about our identity, our race, our gender, our age."

• "We reinforce our own bias in AI, and now it is screening out female candidates. Hang on, if a human hiring manager did that, we would be outraged; we would not allow it. This kind of gender discrimination is not okay. And yet somehow AI has become above the law, because the machine made the decision."

Robin Hauser : Can We Protect AI from Our Biases? :


My Take on the Talk by Robin Hauser :

"We all know that if you are human, you are biased. Sometimes our biases are explicit; other times they are implicit or unconscious. Either way, in some cases bias is not necessarily a bad thing. I cross to the other side of the street when I see a big, scary dog coming my way; bias at its core is a survival technique. But bias becomes a problem when it interferes with the way that we interact with society, when our unconscious biases lead us to make snap judgements or assumptions."

"After all, machines are not human. Computers, while intelligent, don't have human brains. But what I learned alarms me: AI is not the super solution to solve for human bias. In fact, in many circumstances, AI is already as biased as humans."

"Artificial Intelligence is meant to interact with humans, but if you add the human factor and don't account for human behaviour, then you risk losing control of the machine, and losing control can lead to serious consequences."

"And here is when it gets really dangerous. The criminal justice system is using AI to assign risk scores to defendants. These predictive algorithms estimate the risk a defendant poses to public safety. Judges use these scores to aid in decision-making about sentencing, parole, and bail. So, ideally, AI would strip bias from the criminal justice system, but these tools draw from big datasets of previous cases, and historically in the U.S. black men are incarcerated at five times the rate of white men for the same crime. So by using this historical data, the scores will be biased, which means these tools are further perpetuating racial inequality and could be contributing to sending innocent people to jail."
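Hauser's point about skewed training data can be made concrete with a small sketch. The numbers below are invented purely for illustration (they are not the real statistics the talk cites): two groups behave identically, but one group's offences are recorded five times as often, and a naive score fit to the recorded data inherits exactly that distortion.

```python
# Hypothetical sketch: both groups commit offences at the same true rate,
# but group A's offences are recorded five times as often (heavier policing).
# A naive "risk score" learned from the recorded data inherits that 5x skew.
true_offence_rate = {"A": 0.1, "B": 0.1}   # identical underlying behaviour
recording_rate = {"A": 1.0, "B": 0.2}      # biased data collection

population = 10_000
recorded = {g: int(population * true_offence_rate[g] * recording_rate[g])
            for g in ("A", "B")}

# Naive learned score: recorded offences per capita in each group.
score = {g: recorded[g] / population for g in ("A", "B")}

print(score)  # group A looks far "riskier" despite identical behaviour
```

Because the score is learned from what was recorded rather than what actually happened, collecting more data of the same kind never removes the skew, which is Hauser's point about why biased data is "very difficult to take out" once it is inside these systems.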

"And we need to have a conversation on who is governing AI, who is responsible for the ethical standards of these supercomputers. We need to have this conversation now, because once skewed data gets into these deep learning machines, it is very difficult to take it out."

"I urge all of you to think of this: do we want Artificial Intelligence to reflect society as it exists today? Or as an ideal, equitable society of tomorrow?"

Thank you!
