Introduction
Algorithms play a significant role in the Internet and digital media. In the digital era, there is no doubt that everyone lives in a world where algorithms are omnipresent. Algorithms can be defined as ‘the rules and processes established for activities such as calculation, data processing and automated reasoning’ (Flew, 2021). On the positive side, algorithms bring various advantages and make our lives easier. At the same time, a growing body of evidence and research demonstrates that people’s values and social outcomes are impacted by algorithms when they use search engines or social media.
The following content of this blog studies the causes of algorithmic bias and discrimination, and then uses a case study to evaluate the impact of algorithms on people’s daily lives and social norms. Because bias and discrimination through algorithms have appeared in many areas, this blog will focus specifically on gender bias and discrimination, and analyse how algorithms aggravate gender stereotypes and gender inequity.
Algorithmic bias and discrimination
Algorithms are like a mirror: they reflect our values and social norms. Courtney Thomas gave a TEDx presentation on tech bias and algorithmic discrimination. To summarize Thomas’s speech (2019), he suggested that algorithms have the power to reinforce innate prejudices and cause the public to view the world through different levels of bias. As we know, bias and discrimination are social phenomena that usually stem from negative, stereotypical attitudes arising from human perception biases targeting specific communities’ race, gender, religion, sexual orientation, identity, and other attributes. The purpose of discrimination is to differentiate specific groups and marginalize them. Algorithms are often assumed to stand in a neutral, non-discriminatory position because they are not human. In the digital era, however, bias extends to the internet, and ‘algorithmic bias’ is an extension of social bias. Algorithmic bias refers to certain attributes of an algorithm that cause it to create unfair or subjective outcomes (LibertiesEU, 2021). Humans design algorithms and teach them by feeding in massive amounts of data to learn from. Under the influence of history and social norms, algorithms develop by continually learning human behaviour and ideology, and in doing so reinforce societal bias.
Causes of algorithms biases
The research points to several causes of algorithmic bias.
- Search engine results and page rank can influence users’ attitudes and decisions
Nowadays, the public uses search engines frequently and considers the information received accurate and trustworthy. Most importantly, when users view results on the results page, the top-ranked results are the ones they are most likely to click. On the one hand, ‘search engine outcomes may impact people’s opinions, choices, and actions, and search engines may be intentionally used by stakeholders to influence society by attempts of censorship and search engine optimization’ (Wijnhoven & van Haren, 2021). When search engines output socially biased ideas about gender, hierarchy, rights, identity, sexuality, and so on, high dependence on and trust in search engines leads to a subtle acceptance of slanted or biased opinions, thus reinforcing stereotypes and social inequalities affecting certain communities. On the other hand, Noble (2018) indicates that ‘the public believes that what rises to the top in search is either the most popular or the most credible or both’. The Internet is powerful enough to collect vast amounts of information, and search engines use algorithms to provide relevant results for users to solve their problems, whether those results are user-generated content, academic resources, or commercial advertising. Besides, those search results often link to the search company’s commercial partners or paid advertising. Therefore, page rank not only assists users in finding related information, but is also used to steer them toward ‘commercially more interesting’ content that attracts users and satisfies advertisers’ interests.
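The position bias described above can be sketched with a toy model (the numbers and the 1/position rule are illustrative assumptions, not Google’s actual click model): attention decays sharply with rank, so whatever an algorithm places first captures most of the clicks and most of the trust.

```python
# Toy position-bias model: assume attention is proportional to 1/position.
# Purely illustrative -- not any real search engine's click model.

def share_of_clicks(position, total_positions=10):
    """Estimated share of clicks a result gets at a given rank position."""
    weights = [1 / p for p in range(1, total_positions + 1)]
    return (1 / position) / sum(weights)

top_share = share_of_clicks(1)     # roughly a third of all clicks
tenth_share = share_of_clicks(10)  # an order of magnitude fewer
```

Under this assumption, the first result receives about ten times the attention of the tenth, which is why a biased result at the top matters far more than one buried below.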
- Algorithmic personalization
Algorithmic personalization is a kind of customized experience that ‘observes users’ digital habits and predicts their next choices’ (Invisibly, 2021). Most social media platforms and search engine companies provide personalized services, aiming to satisfy more of their users’ needs and enhance engagement. The current discussion about algorithmic personalization argues that ‘it increases relevancy and produce a different output per individual user’ (Bozdag, 2013). Algorithms constantly match and analyse user information to recommend personalised content. Referencing Noble’s idea, with Google as a case study, its personalization gives users expected results on the basis of the data Google collects from them (Noble, 2018). Nevertheless, a concern has been raised that algorithmic personalization shuts out diverse information and opinions. Bozdag (2013) cites the idea that ‘explicit personalization will undermine deliberative democracy by limiting contradictory information’. To put it simply, personalized content filters out content with opposing opinions and consolidates existing thinking. If people search for views tinctured with prejudice, algorithms will hide the other side of the argument and even strengthen social biases.
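The filter-bubble dynamic just described can be shown in a minimal sketch. The scoring rule and all the data below are hypothetical, not any real platform’s system: candidate items are ranked by overlap with the user’s click history, so opposing viewpoints simply never make the cut.

```python
# Hypothetical personalization sketch: rank content by similarity to what
# the user already clicked. Not a real recommender -- an illustration of
# how "relevance" can filter out contradictory information.

def recommend(history, candidates, k=2):
    """Rank candidates by how many tags they share with the click history."""
    def score(item):
        return len(set(item["tags"]) & set(history))
    return sorted(candidates, key=score, reverse=True)[:k]

articles = [
    {"title": "Viewpoint A, part 2", "tags": ["viewpoint-a", "politics"]},
    {"title": "Viewpoint A, part 3", "tags": ["viewpoint-a", "economy"]},
    {"title": "The case for viewpoint B", "tags": ["viewpoint-b", "politics"]},
]
# A user who has only clicked viewpoint-A politics content...
picks = recommend(["viewpoint-a", "politics"], articles)
# ...is only ever shown more viewpoint-A content.
```

The more the user clicks what the system serves, the stronger the match score for similar content becomes: the loop feeds itself.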
Case study
– Google personalized advertising

Figure 1. Google Personalized ads. Retrieved from https://betanews.com/2016/06/29/google-personalized-ads-privacy/
Research by Carnegie Mellon University revealed that ‘Google’s algorithms show better job ads for men’ (Gibbs, 2015). In comparison, women are less likely to see high-paid jobs among the advertisements recommended by Google. In one examination, Google delivered advertising for executive positions paying ‘more than 200 thousand’ 1,852 times to the male group but only 318 times to the female group (Gibbs, 2015).
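The scale of the disparity is easy to quantify directly from the reported figures:

```python
# Ad-delivery counts as reported in the study cited above (Gibbs, 2015).
male_impressions = 1852
female_impressions = 318

ratio = male_impressions / female_impressions
male_share = male_impressions / (male_impressions + female_impressions)
# The ad reached the male group nearly six times as often -- roughly 85%
# of these impressions went to men.
```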
Google’s personalized advertising policy indicates that advertisers are not allowed to target users based on race, gender, religion, sexual orientation, or personal identity. However, the data above demonstrates that gender inequality and bias still occur. The stereotype of women has been shaped by patriarchalism and a patriarchal working culture. According to the data ‘Women in management’ by Catalyst (2022), women hold 31 percent of senior management positions globally. Even though gender barriers and biases have declined, men still dominate the workplace. ‘Men are more suitable than women in leadership positions’ (Tabassum & Nayak, 2021) is the most obvious expression of this gender stereotype. In other words, men are routinely characterized as reliable, professional, and powerful, traits that are then used to explain why men dominate and lead in the workplace.
Google’s personalized advertising relies on algorithms that collect users’ information and preferences to recommend relevant advertising content. However, Google has been criticized many times because this ‘personalization’ allows its advertising system to reproduce gender bias and discrimination. The Markup found that Google gives ‘advertisers the option to keep their advertising from being shown to people of “unknown gender”’ (Merrill, 2021). As mentioned above, Google’s advertising policies rule against gender discrimination; that policy is contradicted by allowing advertisers to target or exclude users based on gender categories. In particular, people who identify as nonbinary or transgender may feel offended and disrespected when the advertising algorithms exclude them.
Existing laws and platform governance policies make it difficult to effectively determine and regulate whether online advertising is biased and discriminatory. Given that Google ads can effectively deepen gender inequalities or even marginalize people such as transgender users, we need to understand whom search engine algorithms are serving. Search engines are the most common function that Internet users use every day. As of January 2022, Google held 91.9% of the market share and dominates the search engine market (Mohsin, 2022). Moreover, 84% of respondents use Google three times a day or more often, showing that the public has become dependent on search engines and used to solving problems through the internet. People mainly treat Google as an information platform free from commercial interest, but ‘Google functions in the interests of its most influential paid advertisers or through an intersection of popular and commercial interests’ (Noble, 2018). Basically, when users type a question or a few words into the search box, the results not only include the answer they are looking for, but also recommend relevant advertising selected by algorithms to cater to Google’s clients’ needs.
– Google’s suggestive autocompletion exposes discrimination against women

Figure 2. UN Women ads campaign. Retrieved from https://www.unwomen.org/en/news/stories/2013/10/women-should-ads
In 2013, UN Women launched an advertising campaign to show that Google’s suggestive autocompletion discriminated against women and denied women’s rights. The ads showed women’s faces in close-up, with suggestive autocompletions covering their mouths. Typing ‘women should’ in the search box, the autocompletion suggested ‘women should stay at home’, ‘should be slaves’, ‘should be in the kitchen’ (Mahdawi, 2013). Google’s suggestive autocompletion obviously operates through algorithms; it aims to improve search efficiency and forecast what information users are looking for. However, the campaign exposed how algorithms still diffuse and amplify gender stereotypes, which is harmful to the development of women’s rights and equality.
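The mechanism the campaign exposed can be sketched with a toy autocomplete (illustrative only, not Google’s system, and the query log below is a hypothetical example): if suggestions are simply the most frequent completions in a query log, then a biased log yields biased suggestions.

```python
# Toy autocomplete: suggest the most common logged queries for a prefix.
# Whatever the crowd types most often becomes "the" suggestion.
from collections import Counter

def autocomplete(prefix, query_log, k=3):
    """Return the k most frequent logged queries starting with the prefix."""
    matches = Counter(q for q in query_log if q.startswith(prefix))
    return [query for query, _ in matches.most_common(k)]

# Hypothetical query log with skewed frequencies:
query_log = ["how to code"] * 5 + ["how to cook"] * 3 + ["how to draw"]
suggestions = autocomplete("how to", query_log)
```

No step in this pipeline is intentionally prejudiced; the bias lives entirely in the data the algorithm mirrors back.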
Algorithms are reinforcing gender stereotypes
While the Internet and algorithms have brought convenience to society, there is no doubt that algorithms have affected people’s understanding of identity, race, and gender. Noble illustrates that when ‘black girls’ is typed into the search box, the system’s autosuggestions associate black women with sexually gratifying objects, portraying them pornographically. This commodification and stigmatization of black women indicates that the ‘ideal’ audience of the search engine is assumed to be male, and women’s bodies are usually presented to flatter male sexual fantasies (Noble, 2018).
Gender bias and racial discrimination are also reflected in Google’s suggestive autocompletion. When users type some keywords, Google’s search engine guesses what they are thinking and recommends oriented results. The UN Women campaign emphasized that Google’s autocompletion algorithms were sexist and discriminatory against women. For general users, autocompletion may appear to represent the most popular opinions in the world, so those biased suggestions can exert an unmeasurable influence and deepen misunderstandings of marginalized communities.
In addition, the case study above showed that Google’s personalized advertising is more likely to recommend high-paid jobs to male users. Gender biases and social norms established by the market and society propagate ideas such as that men are better suited to management positions than women, or that men have advantages over women in the workplace. This evidence proves that the Internet and digital platforms still exhibit gender stereotypes. ‘The internet is a communication environment that privileges the male’ (Noble, 2018), and algorithms play an essential role in reinforcing the ideology of masculinity and male dominance in a patriarchal society.
How to mitigate algorithmic biases and discrimination?
Thomas (2019) indicated in his speech that there is currently little oversight and transparency, and no standard for regulating algorithms. Therefore, to prevent algorithmic biases from having a greater negative impact on the public and reinforcing social inequality, increasing ‘algorithmic transparency’ may be an effective approach. Briefly, algorithmic transparency is defined as ‘openness about the purpose, structure and underlying actions of the algorithms used to search for, process and deliver information’ (TechTarget, 2015). Because humans usually do not understand the process a computer goes through to arrive at a particular result, transparency will help people understand the mechanism of algorithms. Most importantly, increased transparency in algorithms would let the public and governments regulate how algorithms are used, reducing the negative impact of unethical, biased information on society. Finally, if algorithms can reinforce biases and discrimination because they reflect human thinking and behaviour, it also means algorithms have the ability to learn how to become a non-discriminating technology.
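As a hedged sketch of what transparency could enable in practice, one simple audit is to check an ad-delivery system for demographic parity. The group names and audience sizes below are hypothetical (only the impression counts come from the case above), and real fairness audits are far more involved.

```python
# Sketch of a demographic-parity audit: compare per-group exposure rates.
# A gap of zero would mean every group sees the ad at the same rate.

def demographic_parity_gap(shown, eligible):
    """Gap between the highest and lowest per-group exposure rates."""
    rates = {g: shown[g] / eligible[g] for g in shown}
    return max(rates.values()) - min(rates.values())

# Impression counts from the Google ads case, with an ASSUMED equal
# eligible audience of 10,000 per group:
gap = demographic_parity_gap(
    shown={"men": 1852, "women": 318},
    eligible={"men": 10_000, "women": 10_000},
)
# Under these assumptions the gap is about 0.15 -- far from parity.
```

Making such numbers visible to regulators and the public is precisely the kind of oversight that transparency advocates call for.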
Conclusion
To sum up, this blog explains what algorithmic bias and discrimination are, and discusses the causes and conditions of algorithmic bias while analysing the Google case studies. Bias and discrimination have existed in society for a long time and have now expanded into digital space. Honestly, it is challenging to completely avoid discrimination in our daily lives, but algorithms are powerful enough to influence people’s attitudes, beliefs, and values. All the cases mentioned serve as evidence that algorithms occupy a significant position in reinforcing social biases, especially around gender. Until algorithmic bias and discrimination can be removed, the harms of those biases should prompt everyone to consider how to use algorithms ethically and make them more inclusive.
Reference
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209-227. https://doi.org/10.1007/s10676-013-9321-6
Catalyst. (2022). Women in management (quick take). Retrieved April 6, 2022, from Catalyst website: https://www.catalyst.org/research/women-in-management/
Flew, T. (2021). Issues of Concern. Regulating Platforms. John Wiley & Sons.
Gibbs, S. (2015). Women less likely to be shown ads for high-paid jobs on Google, study shows. The Guardian. Retrieved from https://www.theguardian.com/technology/2015/jul/08/women-less-likely-ads-high-paid-jobs-google-study
Invisibly. (2021). Personalization Algorithms: Why it matters and how it impacts people. Retrieved April 6, 2022, from Invisibly website: https://www.invisibly.com/learn-blog/personalization-algorithms
LibertiesEU. (2021). Algorithmic bias: Why and how do computers make unfair decisions? Retrieved April 6, 2022, from Liberties.eu website: https://www.liberties.eu/en/stories/algorithmic-bias-17052021/43528
Mahdawi, A. (2013). Google’s autocomplete spells out our darkest thoughts. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2013/oct/22/google-autocomplete-un-women-ad-discrimination-algorithms
Merrill, J. B. (2021). Google has been allowing advertisers to exclude nonbinary people from seeing job ads. Retrieved April 6, 2022, from The Markup website: https://themarkup.org/google-the-giant/2021/02/11/google-has-been-allowing-advertisers-to-exclude-nonbinary-people-from-seeing-job-ads
Mohsin, M. (2022). 10 Google search statistics you need to know in 2022. Retrieved April 6, 2022, from Oberlo website: https://au.oberlo.com/blog/google-search-statistics
Noble, S. U. (2018). A society, searching. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Tabassum, N., & Nayak, B. S. (2021). Gender stereotypes and their impact on women’s career progressions from a managerial perspective. IIM Kozhikode Society & Management Review, 10(2), 192–208. https://doi.org/10.1177/2277975220975513
TechTarget. (2015). Algorithmic transparency. Retrieved from https://www.techtarget.com/searchenterpriseai/definition/algorithmic-transparency
TEDxTalks. (2019). Tech Bias and Algorithmic discrimination [Video]. Retrieved from https://www.youtube.com/watch?v=N9XaLNfExgM
Wijnhoven, F., & van Haren, J. (2021). Search engine gender bias. Frontiers in Big Data. https://doi.org/10.3389/fdata.2021.622106