Introduction
In recent decades, the development of the Internet has been inseparable from the collection of people's data, and the use of that data increasingly shapes people's lives. The analysis and processing of such data, in turn, depend on algorithms. Algorithms are rules and procedures that operate on data to perform activities such as calculation, data processing, and automated reasoning (Flew, 2021), which means that algorithms and algorithmic selection will exert more and more influence. As a result of this trend, however, algorithmic decision-making raises a range of legal and ethical issues, including privacy, bias, and security, and algorithmic questions have become a hotly debated topic. This article discusses how to conduct normatively and ethically sound algorithmic decision-making and accountability, drawing on the case of the "Qing Lang" Operation.
Algorithms Are Everywhere In Life
Algorithmic selection on the Internet refers to "the process of assigning content relevance to information elements in a data set through an automated statistical evaluation of distributed data signals," i.e. a computational process in which user input interacts with a data set to produce an output. In a sense, all Internet systems are "algorithmic" and touch every aspect of our lives: they receive particular inputs and, by means of computation, produce particular outputs. Some involve explicitly programmed steps, in which existing knowledge about the world is formally represented so that software agents can reason on the basis of that knowledge, such as the ranking and categorization of information at various levels (de Laat, 2017); the familiar QS university rankings are one example. Others, based on "machine learning" algorithms, belong to the field of artificial intelligence: machine learning involves training models with learning algorithms on large data sets of related past phenomena in order to classify or predict future phenomena (de Laat, 2017), such as estimating the probability of future earthquakes. Although the two approaches acquire their predictive and classificatory capabilities in different ways, both can be regarded as algorithmic decision systems, because both automatically derive decision-relevant outputs from given inputs (de Laat, 2017).
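The contrast between the two approaches can be sketched in a few lines of code. This is a toy illustration only, not drawn from any of the cited sources; all names, data, and the simplistic "model" are invented for the sketch.

```python
# Toy contrast between two kinds of algorithmic decision systems.
# All names and data here are invented for illustration.

# 1. Explicitly programmed rule: rank items by a hand-written criterion,
#    like a rankings table built from fixed, formally represented knowledge.
def rank_by_rule(items):
    # items: list of (name, score) pairs; higher score ranks first
    return sorted(items, key=lambda pair: pair[1], reverse=True)

# 2. "Machine learning" in miniature: derive a decision threshold from
#    labeled past examples, then use it to classify new inputs.
def learn_threshold(examples):
    # examples: list of (value, label) pairs with boolean labels;
    # the "model" is just the midpoint between the two class means.
    pos = [v for v, label in examples if label]
    neg = [v for v, label in examples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, value):
    # decision-relevant output derived automatically from the input
    return value >= threshold

universities = [("A", 71), ("B", 93), ("C", 85)]
print(rank_by_rule(universities))  # ranking produced by an explicit rule

past = [(2.0, False), (3.0, False), (7.0, True), (8.0, True)]
model = learn_threshold(past)      # "trained" on past phenomena -> 5.0
print(predict(model, 6.5))         # prediction for a new input -> True
```

In both cases the system maps a given input to a decision-relevant output automatically; the difference is only in where the decision rule comes from, which is the distinction de Laat's discussion turns on.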
Since about 2005, algorithms have become increasingly effective (Flew, 2021) and are now widely used in every aspect of our lives. This is the result of major improvements in computing power and speed and, more importantly, in the volume of data that can be processed, which has made people's lives both more convenient and more algorithmically mediated.
Beyond their technical function, algorithms matter socially. As we spend more time on the Internet and participate in more activities there, algorithm-related decision rules and data-driven processes begin to affect agenda-setting and framing, and therefore how we act, so that applications based on algorithmic selection attract growing attention in market production and distribution. Algorithms form part of what Adrienne Massanari calls "platform politics": "a collection of designs, policies, and norms... [that] encourage certain kinds of cultures and behaviours to coalesce on the platform, while implicitly discouraging others" (Flew, 2021).
Limitations And Challenges Of Algorithms
Algorithms, and the ways they use big data, shape individual and social outcomes. Terry Flew (2021) summarized five challenges facing algorithms: ethics and law (do individuals and groups have the right to contest algorithmic decisions that affect their lives?); bias, fairness, and transparency (information tends to be produced in favour of more powerful groups and reflects social biases); accountability and institutions (if decisions are perceived to be made by "machines," who is held accountable for the results?); data governance and privacy (personal information may be disclosed in the process of data collection); and effects on human behaviour (for example, whether machines reinforce or depart from existing beliefs when making learning-based choices). These problems will be debated for as long as algorithms develop, because democratic societies face a tension: on the one hand, they need universal political and moral rules that treat everyone the same; on the other hand, reasonable people may disagree about the intellectual, evaluative, and moral questions those rules settle.
Social Responsibility Of Algorithms
Since the evolution of algorithms has been accompanied by many ethical and security concerns, effective governance of the Internet has become especially important. Who is responsible for the results of an algorithm? We need to discuss algorithmic accountability.
Reuben Binns (2017) argues that, to address these challenges, proponents of algorithmic accountability should focus on the democratic political ideal of public reason: by ensuring that decision-makers can explain the outputs of their systems according to epistemic and normative standards acceptable to all reasonable people, public reason constrains the decision-making power of the algorithm. Giving this general principle priority in an accountability system not only prevents the violations of established norms that can follow from over-reliance on algorithmic outputs, but also helps regulate decision-makers and reduce result-oriented bias. When a decision-maker's explanation of an algorithmic system is insufficient or unclear, leaving the legitimacy of the decision in doubt, public reason can help. Moreover, public reason may be more than a constraint on decision-makers; it may also set limits on the kinds of grievances for which decision-makers can expect sympathy.
Another potential challenge to accountability is the opacity of algorithmic systems. Could greater transparency help restore accountability? There are several arguments against full transparency: the loss of privacy when data sets are made public, the adverse effects of disclosing the algorithms themselves, the potential loss of a company's competitive advantage, and the inherent opacity of some complex algorithms. Paul B. de Laat (2017) studied the problem of algorithmic transparency and concluded that, at present, full transparency toward oversight bodies is the only viable option, and that extending it to the general public is not advisable. First, privacy concerns make it unwise to hand the underlying data set to anyone who asks; that would amount to an invasion of privacy. Second, full transparency about machine learning models in use may invite interested parties to game the system, undermining its efficiency. Third, companies as a rule assert property rights over their algorithms (de Laat, 2017). Full transparency may therefore provoke greater conflict between decision-makers and those affected by their decisions, whereas oversight bodies can be granted enough transparency to support supervision and enforcement.
In addition, Zerilli et al. (2019) propose that algorithm-based tools should not be applied to high-stakes or safety-critical decisions unless the relevant systems are clearly "superior to humans" in the relevant fields; neither should "machines" be left in full control of such decisions, nor should humans rely too heavily on the machines. The goal should be a complementary relationship between highly capable algorithmic tools and collaborating human agents.
Case Study — “Qing Lang” Operation
The “Qinglang” Operation is a good example of algorithmic accountability and regulation in China.
With the rapid development of China's Internet market in recent years, large amounts of capital have flowed into online platforms. Some enterprises and online accounts have misused the discursive power that traffic brings, disrupting the healthy development of China's Internet. In response, the Cyberspace Administration of China (CAC) has launched a series of regulatory campaigns targeting online disorder.
The Qinglang Operation is a series of special Internet governance campaigns, code-named "Qinglang," that the Cyberspace Administration of China (CAC) has run since 2016, with a focus that varies from year to year. In the 2022 Qinglang campaign, the special action on comprehensive algorithm governance was an important component. Its goals were to investigate and rectify algorithm security problems on Internet enterprise platforms, to normalize and standardize comprehensive algorithm governance, and to maintain the security of the network environment (Jiemian News, 2022).
In 2022, the main algorithm-governance measures under Qinglang were: organizing Internet enterprises to carry out self-examination and evaluation of their algorithm security capabilities; cooperating with local cyberspace affairs departments to conduct on-site inspections of local enterprises; urging enterprises to complete an inventory of their algorithm applications and to file algorithm record information in a timely manner; supervising enterprises in establishing algorithm security governance bodies and specialized personnel commensurate with their business scale, and in establishing and improving rules and regulations for algorithm security; and requiring identified problems to be rectified within set deadlines. The program has achieved substantial results (Jiemian News, 2022).
As the "Qinglang" Operation shows, the Chinese government plays a regulatory role over the Internet, while enterprises assume direct responsibility for their algorithmic systems. The filing of Internet companies' algorithms can be regarded as a form of transparency disclosure at the regulatory level. Moreover, state participation and the formulation of regulations bring algorithmic systems within a legal framework.
Conclusion
Regarding algorithmic decision-making and social responsibility, given the growing influence of algorithms and the problems they raise, calls for accountability are increasing. At the level of social responsibility, decision-makers must provide reasonable justifications for the outputs of their automated algorithmic systems and take responsibility for them. With public reason fully taken into account, accountability requirements for algorithms should be set within a reasonable framework. As for the transparency of algorithmic systems, full transparency toward oversight bodies is more conducive to the sound implementation of algorithms, to avoiding risk, and to securing the greatest benefit from algorithmic services. But we must also recognize that the disputes algorithms generate will be long-term and cannot be completely resolved; what we can do is seek a relative balance among development, interests, and social and moral oversight.
Algorithms are now in an era of rapid development. To ensure that algorithmic choices are ethical and legal, to give full play to their advantages, and to promote human development and progress, we still need to keep a positive, long-term perspective and to keep exploring through trial and error. Only in this way can we create a better online environment and advance science, technology, and human development.
References
Binns, R. (2017). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5
de Laat, P. B. (2017). Algorithmic decision-making based on machine learning from big data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525–541. https://doi.org/10.1007/s13347-017-0293-z
Flew, T. (2021). Regulating platforms (pp. 79–86). Polity.
Jiemian News. (2022). Cyberspace Administration of China (CAC): Launch of the 2022 "Qinglang: Comprehensive governance of algorithms" special action. https://baijiahao.baidu.com/s?id=1729504181490024346&wfr=spider&for=pc&searchword=%E6%B8%85%E6%9C%97%E8%A1%8C%E5%8A%A8%E7%AE%97%E6%B3%95%E6%BB%A5%E7%94%A8%E6%B2%BB%E7%90%86%E6%88%90%E6%9E%9C
"Qing Lang" Operation announces ten key tasks, including comprehensive rectification of illegal comebacks by misbehaving artists. (2022). Sohu.com. https://www.sohu.com/a/530555649_114941
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Algorithmic decision-making and the control problem. Minds and Machines, 29(4), 555–578. https://doi.org/10.1007/s11023-019-09513-7