
Offprint of article from

transparency in the future – Swedish openness 250 years Anna-Sara Lind, Jane Reichel & Inger Österdahl (eds), Ragulka 2017


On the openness of the digital society: from religion via language to algorithm as the basis for the exercise of public powers Markku Suksi*

1. Introduction

The 1766 Freedom of the Press (Constitution) Ordinance of the Kingdom of Sweden has its roots in the secrecy of parliamentary proceedings that prevailed until 1766 and in the understanding that legitimate decisions by the legislative body cannot be made without the population having access to the materials underlying the decisions of the legislature. By extension, this same understanding spilled over to decisions of governmental bodies and courts of law.1 The constitutional provisions of 1766 were in many ways a harbinger of a new era of legitimacy of public powers that was inaugurated by the 1789 French Revolution, from which point on the role of religion as the ground of legitimation of public power started to diminish, only to vanish later on: the king had been perceived as the power ordained by God for the exercise of power in a top-down manner in society, but was no longer so perceived from the 19th century on. The new ground of legitimation was found in the language of the majority, in a manner that resulted in nationalism as the predominant discourse of legitimacy under the slogan "one people, one state, one language".

During the 21st century, it is possible to sense a paradigm shift of a kind similar to those of the 18th and 19th centuries. Language as the legitimacy-creating fundament is starting to give way to something different: the algorithm as the basis of decisions in the digital world. Today, algorithms already carry out almost all decisions in the stock exchange when shares are sold and bought, which leads to commercial decisions in the blink of an eye,2 and algorithms in search engines keep track of the individual preferences of consumers, visible as proposals on the computer screen on what the algorithm thinks the consumer would like to buy or search for.3 Recently, a discussion about the law of unmanned computer-steered vessels has begun,4 as has a discussion about lethal autonomous weapons and autonomous weapons systems5 and all kinds of robotics that will perform tasks

*   I am greatly indebted for comments that my colleague at Åbo Akademi University, Dr Anssi Öörni, Professor of Information Systems, made from an information systems point of view on an early draft of this article, and I am particularly indebted to him for proposing the first idea for Figure 2. I also thank Ms Heli Kauhanen, Unit Leader at the Social Insurance Institution of Finland, for valuable help with materials. The responsibility for all errors and misinterpretations is, of course, mine only.
1   See Mariya Riekkinen and Markku Suksi, Access to Information and Documents as a Human Right (Institute for Human Rights of Åbo Akademi University 2015) 6–16. See also Michael Roberts, The Age of Liberty – Sweden 1719–1772 (CUP 1986). Originally, the idea of openness in a free society may be traced back to section 21 of Thoughts on Civil Liberty by Peter Forsskål (http://www.peterforsskal.com/thetext.html), originally from Helsingfors (Helsinki in Finnish), but the political work of turning the idea into the principle of openness in 1766 was performed by Anders Chydenius from Gamlakarleby (Kokkola in Finnish), who represented the Ostrobothnian clergy in the Diet.
2   See Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU, Article 4(1), para. 39: "'algorithmic trading' means trading in financial instruments where a computer algorithm automatically determines individual parameters of orders such as whether to initiate the order, the timing, price or quantity of the order or how to manage the order after its submission, with limited or no human intervention, and does not include any system that is only used for the purpose of routing orders to one or more trading venues or for the processing of orders involving no determination of any trading parameters or for the confirmation of orders or the post-trade processing of executed transactions;"
3   See, in particular, Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015) 3–4, 8–10, 17, 20–22, 33, 66, 69–75, 83–84, 88, 102–103, 129–132, 193–194, 216–217, where the secrecy of the algorithms used by private data giants such as Google, in a manner that makes possible an unprecedented control of the preferences of consumers and individuals, is identified as a serious problem that should be addressed by appropriate legislation and oversight.
4   See, e.g., Michal Chwedczuk, 'Analysis of the Legal Status of Unmanned Commercial Vessels in U.S. Admiralty and Maritime Law' [2016] Journal of Maritime Law & Commerce 47, 123.
5   See, e.g., Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, on lethal autonomous robotics (LARs), Human Rights Council, Twenty-third session, Agenda item 3, UN GA A/HRC/23/47, 9 April 2013; Christof Heyns, 'Autonomous weapons systems: Living a dignified life and dying a dignified death' in Nehal Bhuta et al. (eds), Autonomous Weapons Systems (CUP 2016) 3–19; Christof Heyns, 'Human rights and the use of autonomous weapons systems (AWS) during domestic law enforcement' Human Rights Quarterly 38 (2016) 350–378.

that hitherto have been in the hands of human beings.6 Decision-making, at least in public authorities but potentially also in courts and legislatures, will, it can be predicted, increasingly move in a more digitalized direction, where algorithms are used to make decisions, very much for reasons of efficiency. One result of this development is that language will be replaced by the algorithm (or, rather, that the algorithm will gradually subject language and languages to itself): there is a high likelihood that decisions of public bodies will increasingly be made in digitalized decision-making procedures where the interaction takes place with the computer interface of the public authority, where no physical person as a representative of the public authority any longer makes the final decision concerning, e.g., social benefits,7 speeding tickets,8 taxes9 or elections,10 and where decisions are executable within seconds after the information is entered. Computers can also, in such scenarios, use digital registers of all kinds and combine information from these sources with the information submitted by the individual.11

As a consequence, the population of a society is, in the future, placed in a similar situation concerning the openness of the algorithm, fashioned in terms of computer programmes, as the population was in 1766 in relation to legislative and administrative decisions fashioned in a natural language. How would we know whether the digitalized decisions are correctly made? The one(s) who control the algorithms control society. The government should, of course, maintain some control, but the government department managing an algorithm might not be interested in disclosing the algorithm, and such an algorithm might also be developed by a private software corporation that is uninterested in disclosing the source code. The question is thus whether an algorithm should be public, or whether it can remain secret. The proposal here is that algorithms related to the exercise of public powers have to be public, or else the future society runs into legitimacy problems similar to those of 1766. In addition, it is also possible to argue that algorithms dealing with non-public decision-making, such as consumer purchases, should be public, although there the principle of commercial competition may prevent a move in the direction of openness. The evolution from religion through language to algorithm as the fundament of legitimacy in society is creating challenges of all sorts for access to the most relevant decisions (or, rather, for access to strings of programme code).

6   See, e.g., European Civil Law Rules in Robotics, Study, European Parliament, Directorate-General for Internal Policies, Policy Department C: Citizens' Rights and Constitutional Affairs (European Parliament 2016), at http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU%282016%29571379_EN.pdf (accessed on 16 January 2017). This study opens up, inter alia, the discussion of whether or not algorithms should be granted legal personality in a manner comparable to companies and of how algorithms used in robotics should be regulated. See also Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Committee on Legal Affairs of the European Parliament, Rapporteur: Mady Delvaux, which is expected to result in a debate on artificial intelligence and robotics in the European Parliament during 2017.
7   As indicated below, in section 4, the Finnish Social Insurance Institution has transferred several of its simpler procedures of decision-making concerning benefits to an automated decision-making procedure.
8   Contrary to what could be expected, the traffic surveillance cameras used in Finland for the purpose of detecting speeding violations are currently not components of any automated system of decision-making. Instead, the photographs and the speed information are reviewed by a police officer, that is, a civil servant, who issues the speeding ticket after considering whether the grounds for a violation are fulfilled.
9   See, e.g., the Swedish Act on the Traffic Congestion Tax (2004:629), section 2 of which provides that the National Board of Traffic decides, on behalf of the National Board of Taxation, on the congestion tax by means of an automated procedure on the basis of information in the road traffic register. In practice, this means that toll cameras photograph the registration number of each vehicle passing through and that the tax is automatically charged once per month to the owner of the vehicle. No civil servant is physically involved in this automated process, except where the owner of the car decides to start a review process concerning the automated decision on the tax.
10   See, e.g., Russian Federation, Elections to the State Duma 19 December 1999, OSCE/ODIHR Final Report, at http://www.osce.org/odihr/elections/russia/16293?download=true (accessed on 3 October 2016) 23: "Although GAS Vybori has been in use for a period of time, it has not been approved by the State Duma as the source of "official" results. And, in spite of its apparent successes in these and past elections, critics still express concerns about its vulnerability to manipulation. At the core of these doubts is discomfort with the lack of transparency that surrounds the system." Currently, the use of the state automated information system "Vybory" (GAS Vybory) is established in art. 74 of the Russian Federal Law "On Basic Guarantees of Electoral Rights and the Right of Citizens of the Russian Federation to Participate in a Referendum" and in art. 94 of the Russian Federal Law "On Elections of Deputies to the State Duma of the Federal Assembly of the Russian Federation". According to the Code of Good Practice in Electoral Matters, adopted by the Venice Commission at its 52nd session (Venice, 18–19 October 2002), para. 44, "the system's transparency must be guaranteed in the sense that it must be possible to check that it is functioning properly". Whether this implies the transparency of the algorithm is difficult to say, but the software producers are, according to an interlocutor, normally unwilling to open up the construction of the algorithm, which in principle is very simple as to its contents. According to the same interlocutor, it seems unlikely that anyone would still today try to influence an election result by corrupting the algorithm. Instead, any fraud is likely to take place before the votes are entered into a computerized system.
11   For universities, digitalisation could imply, for instance, the use of electronic exams and digitally graded MOOCs for the completion of degree requirements on the basis of an algorithm.
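The fully automated charging procedure described in footnote 9 (a camera reads the registration number, the road traffic register supplies the owner, and the tax is charged once per month with no civil servant involved) can be sketched roughly as follows. This is a purely hypothetical illustration: the flat toll rate, the register contents and all names are invented for the sketch and do not describe the actual Swedish system, in which the tariff varies.

```python
from collections import defaultdict

# Hypothetical flat toll per passage (the real tariff varies by time of day).
TOLL_PER_PASSAGE = 20  # SEK, assumed for illustration only

# Stand-in for the road traffic register: registration number -> owner.
ROAD_TRAFFIC_REGISTER = {
    "ABC123": "owner-1",
    "XYZ789": "owner-2",
}

def monthly_tax_decisions(camera_passages):
    """Aggregate camera-recorded passages into one automated monthly
    tax decision per vehicle owner, with no human step involved."""
    charges = defaultdict(int)
    for registration_number in camera_passages:
        owner = ROAD_TRAFFIC_REGISTER.get(registration_number)
        if owner is None:
            # Unreadable plate or unknown vehicle: left for manual handling.
            continue
        charges[owner] += TOLL_PER_PASSAGE
    return dict(charges)

passages = ["ABC123", "ABC123", "XYZ789", "UNKNOWN"]
print(monthly_tax_decisions(passages))  # {'owner-1': 40, 'owner-2': 20}
```

The point of the sketch is that the "decision" on the tax is nothing more than the deterministic output of the algorithm over register data; a human enters the process only if the owner initiates a review.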


2. Legitimacy of Public Decisions: from God to Language and further to the Algorithm

Today, during the first years of the 21st century, language is still the main technique of governance. This has been so since the mid-19th century, when language was promoted to the position of the main governance technique, that is, as the point of departure and mechanism of legislative work and of the implementation of legislation. The 1789 French Revolution brought about the collapse of the governance technique of the ancien régime, which was based on a strict stratification of society, with a king ordained by God at the top. According to the old governance technique, societal power was delegated to this world by God, and the king wielded that divine power as a worldly representative of God. In his turn, the king governed his kingdom by means of the estates, so that the nobility consisted of the immediate subjects of the king, while the other estates were governed through the nobility (although in the Kingdom of Sweden, the nobility never acquired quite this kind of role as an intermediary between the king and the other subjects). In the Diet, the estates convened for debate and decision-making (in the Kingdom of Sweden, there were four estates: nobility, clergy, bourgeoisie and landowning peasants; in most other countries there were three: nobility, clergy and bourgeoisie). Each estate designated its representatives to the Diet by means of some procedure, often established by the estate itself. In this context of legitimacy through religion-based social stratification, language was a secondary consideration, because the different segments of the people were, in the first place, loyal to their king. When the idea of the divine origin of the king's power and of the position of the estates as legitimating factors in this power structure was discarded towards the end of the 18th century, in particular through the French Revolution, a vacuum emerged in the legitimacy of the exercise of public powers.
This vacuum soon found a potential filling in the area of present-day Germany, where philosophers such as Herder, Fichte and Schiller developed, at the beginning of the 19th century and in part on the basis of the philosophy of Hegel, a national-romantic idea based upon language.12 According to this idea, there existed a mystical and even mythical people, which inhabited a certain territory and which spoke one language. In fact, the ideological construction presupposed that the people in question had long and honourable traditions from the beginnings of history. It also presupposed that the territory in question had from the beginning been designated for this people only and that only one language, which started to be used as a vehicle of governance, was spoken in the territory. Nationalism was born. In this way, language was developed into a new technique of governance, which bound together the individuals belonging to the people as citizens living in the territory of the state. At the same time, "other" persons were informed of the fact that they did not belong to the people unless they lived in the territory and spoke the same language. The aim was to create a pure nation state. This national-romantic construction, however, had no real-world basis in Germany or anywhere else, because in different states there normally lived one or more groups that spoke a language other than the one language predicted by the ideology of nationalism. The merger of language and ideology in this way proved to be problematic, because the ideological dimension tended to overturn the neutral nature that could have been possible for a technique of governance entirely based on language.

At this juncture, it is possible to ask whether a paradigm shift similar to the one that took place with respect to religion at the end of the 18th century is going to take place with respect to language, this time a shift from language-based legitimacy to digital decision-making where legitimacy is supplied by the operation of algorithms.13 In this respect, the 1980s and 1990s mark the decades when public authorities started to fashion various decision-making mechanisms so as to make possible contacts between citizens and public authorities by means of digital tools. In the beginning of the 21st century, this development has intensified so as to make it possible to envision a development where decision-making by public authorities is increasingly automated and run on the basis of algorithms. This development is by no means finished as of yet, but looking into the future, it is possible to discern an increasing intensity in digital decision-making.

12   Apparently, this philosophical paradigm change coincided with the era of industrialism, which was a technological paradigm change. See Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton & Company 2014) 6–7.
13   See, e.g., Markku Suksi, 'Rätten att använda sitt eget språk' ['The right to use one's own language'] in Endre Brunstad, Ann-Kristin Helland and Edit Bugge (eds), Rom for språk: nye innsikter i språkleg mangfald (Novus forlag 2014) 36, footnote 6. See also Brynjolfsson and McAfee (2014) 7–8, who think that the digital age represents a technological paradigm change in the area of mental power similar to the one that the (steam) engine brought about in the area of physical power. This technological change might also coincide with a more philosophical paradigm change from language to algorithm as the basis of legitimacy.

The evolution of the paradigm shifts in the area of legitimacy for the exercise of public authority may be presented in the following way (figure 1):

Figure 1: Paradigm shift in governance technique: from religion via language to algorithm.

    Religion     ca 1000 -> 1900s          Rule of king         1789 French Revolution                    Priest
    Language     ca 1800 -> 21st century   Rule of law          Nation state, nationalism                 Lawyer
    Algorithm    ca 1980s -> future        Rule of algorithm    Automatisation based on digitalisation    "Algoreader"

The visualization of the paradigm changes in the legitimacy of public authority should be understood as depicting a gradual process: no paradigm change is very abrupt. Instead, each paradigm shift is a relatively slow process that takes decades to complete. In fact, a paradigm change might never be totally complete, because a previous legitimacy paradigm may to some extent be subsumed within the subsequent one, so that at least some characteristics of the previous paradigm are preserved. Therefore, because the overlap of the new paradigm is not total, some traits of the previous paradigms remain detectable during the era of the new paradigm. This would mean that some features of religion as the legitimacy paradigm have been preserved in the language-based legitimacy paradigm,14 and the same is likely to happen with the next paradigm shift: the algorithm paradigm, or digital paradigm, will preserve some of the language-based legitimacy paradigm. What this would seem to imply is that an algorithm-based decision-making system could produce its material decisions and services in any language. To speculate wildly, such a decision-making system could make, for instance, the

14   However, it seems that globally, the religion-based legitimacy paradigm is still in operation in a good number of states, such as those where Islam is followed as a state religion. Arabic as a marker of nationality has not succeeded in replacing religion as the legitimacy paradigm. It remains to be seen whether the paradigm of religion in Islamic countries will let itself be superseded by the paradigm of algorithm, but the prediction here is that it eventually will (although an algorithm can probably be written so as to take religion into account).

European Union, with its 24 official languages, much more easily manageable and perhaps facilitate world government in matters that require global solutions.15

The paradigm shift from religion to language as the basis of legitimacy caused an important change in the way in which norms were made, by opening up the non-transparent or only partly transparent ways of law-making that existed until the 19th century and improving their transparency, so that the transparency of law-making had become very broad by the 2000s. This development is, of course, tied to the increasing participation of the population in law-making, but it is submitted here that such participation was dependent on language as the basis of the technique of governance. At this moment, it is not possible to know exactly how the shift from language to algorithm will affect transparency, but it is entirely possible to envision two scenarios: limited transparency, if the algorithm is closed to general access, and broad transparency, if the algorithm is open to general access (see figure 2 below).16 At the same time, the governance technique that is used may display different degrees of rigidity in relation to the existing norms. For instance, during the 17th century, norms that had a connection with religion were very rigid and inflexible, because the entire society was based on the premise that the country followed a particular religion. In such a situation, the distance between the morals prescribed by the religion and the applicable law was very narrow. The situation changed with the paradigm shift from religion to language, which made it possible to incorporate into legislation norms that were not prescribed by religion and were thus less rigidly controlled by it. In such a situation, the distance between the morals prescribed by religious conceptions and the applicable law could be wider and the law more flexible in its approach to different issues.
15   For a prognosis on how the paradigm shift might impact the lawyers and their profession in the 2020s, e.g., by facilitating on-line dispute resolution, see Richard Susskind, Tomorrow’s Lawyers: an Introduction to Your Future (OUP 2013). 16   It is evident that different areas of law will be differently affected by an increasing development into a world of algorithms. For instance, human rights law, which emerged in the aftermath of the Second World War and which emphasizes the rule of law from the point of view of language as the governance technique, may face unexpected challenges (but perhaps also certain opportunities) by the increasing use of the algorithm as a governance technique. If the algorithm remains closed (see below, figure 2), the root of decision-making in society will be non-transparent and lead to a situation where accountability for governmental action and for the legitimacy of legislation may be difficult to realize. Therefore, there is reason to consider the impact of the rule of algorithm on the human rights discourse and to develop an adequate response.

The question now is how the on-going paradigm shift from language to algorithm will affect transparency, on the one hand, and the relationship between conceptions of what the law should be and what the outcome of legislative procedures is, on the other. How is the use of algorithms going to affect the fundamental structures our societies are based upon? No clear answer is available as of yet, but depending on whether the algorithm is open or closed, it could be suggested that at least three alternatives exist. The first one, premised on the closed or non-public nature of the algorithm, results in a positioning that is to some extent similar to the situation under a religion-based governance technique (without, of course, suggesting that the algorithm would be a religion). The second one, premised on the open or public nature of the algorithm, results in a positioning that is to some extent similar to the situation under a language-based governance technique. The third alternative is one that might not yet be possible to formulate, but its existence could be presumed on the basis of the fact that the other two governance techniques have developed their own approaches to issues of transparency and of the translation of morals into law.

Figure 2: Algorithm as a governance technique.
[The figure plots the governance techniques along two dimensions: the vertical axis shows transparency in the creation of law, from limited to broad, and the horizontal axis shows the degree of rigidity in the relationship between governance technique and existing norms, from inflexible to flexible. "Algorithm open" and "Language" are placed towards broad transparency, while "Religion" and "Algorithm closed" are placed towards limited transparency and inflexibility.]

Figure 2 invites the speculation that, along the two dimensions we can currently imagine, our approach to the degree of openness of the algorithm may shape our societies in crucial ways. If the choice is a closed and non-public treatment of the algorithm, we may be facing a future society that is less transparent and more inflexible than the one we currently live in. In such a situation, our future might contain a return to an opaque governance technique similar to that of religion. However, if the choice is an open algorithm, we may be able to preserve the relatively broad transparency and flexibility that are characteristic of current society.17 It is, however, entirely possible that the algorithm-based society will develop an approach of its own that cannot be completely envisioned in our two-dimensional sketch.

3. The Current "Non-Regulation" of the Algorithm

It appears that the Finnish legal order did not contain, during the fall of 2016, any provisions specifically relating to the concept of the algorithm. This means that from the point of view of the law, an algorithm written for a public authority is subject to the regular rules concerning property, in particular intellectual property. The author of the algorithm, which may be a private individual but is more often a business enterprise specialized in software production, owns the algorithm until it is bought by the user or delivered to the user by the programme developer. Public authorities may engage in in-house software production, in which case the programme becomes the intellectual property of the employer (such as the state), but they may also purchase the programme from an external programme developer pursuant to the rules of public procurement.18 The programme developer, in delivering the software, may be uninterested in delivering the algorithm, or at least it might be uninterested in making the algorithm public, because in that case all other software developers and users would know how the algorithm and the software are constructed, which could, as a consequence, reveal the business secrets of the software producer or some other features that might make it possible for the public to misuse the electronic decision-making procedure.19 According to an interlocutor in charge of the procurement of software for a governmental agency, the procurement contracts try to make the point that the algorithms should be released for publicity, but in practice the software producers have not been willing to release the algorithms. This means that the public (or rather the experts within the public) have no possibility to review the operation of the algorithm. Also, releasing the algorithm could make it difficult or even impossible for the original author of the software to monopolize the particular programme so as to tie the public authority using the programme to the same software producer. In addition, such "compartmentalization" of algorithms leads to a situation where the software of different producers may be incompatible with other software or difficult to make compatible with it.

The openness of the algorithm has at least to some extent been discussed from a compatibility perspective in relation to legislative activities in Finland. In Government Bill 153/1999, containing a proposal for an act on electronic access to administration, the Government of Finland discussed, inter alia, electronic signatures and expressed the view that the electronic signature should be based on a procedure that is reviewable in the public sphere.20 According to the Government, this can take place on the condition that the algorithm used for encryption is generally known. Because encryption is based entirely on the strength of the algorithm that is used and on the secrecy of the encryption key, the Government concluded that it is necessary that the algorithm be public so that the users can ascertain

17   It appears that, at the moment, the algorithm is closed, at least at a general level. Therefore, it is easy to agree with Pasquale (2015) 92: "It does not follow, however, that doing nothing is the preferable option. We need to revive regulation, not give up on it. Internet service providers and major platforms alike will be a major part of our informational environment for the foreseeable future. The normative concerns associated with their unique position of power are here to stay. A properly designed regulatory approach may do much to clarify and contain the situation; without one, [we –MS] will deteriorate."
18   However, in Finland, the Social Insurance Institution uses its own in-house programme developers to establish software for the SII. This means that the algorithm remains in the ownership of the SII, which is an independent public authority with its own legal personality.
19   An interlocutor at the Finnish Social Insurance Institution was of the opinion that openness of the algorithm would not be possible, for the reason that it would reveal the monetary level of claims below which all decision-making concerning a benefit is automated and only a certain random sample is checked for correctness.
20   Regeringens proposition till riksdagen med förslag till lag om elektronisk kommunikation i förvaltningsärenden (RP 153/1999) [Government bill to Parliament with a proposal for an act on electronic communication in administrative matters].

that the encryption key has a sufficient level of strength.21 The Government thought that a sufficient level of publicity means a description so accurate that a specialist can create, on its basis, another programme that is compatible with the relevant encryption programme. According to the Government, this requires that standardized algorithms be used in encryption.

Current EU law is at least moderately negative about the use of automated procedures of decision-making. According to Art. 12(a) on the right of access of Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, "Member States shall guarantee every data subject the right to obtain from the controller […] knowledge of the logic involved in any automatic processing of data concerning him at least in the case of the automated decisions referred to in Article 15 (1)".22 The reference to "logic" appears to open up the possibility of access for the party to the operation of the algorithm. Article 15, concerning automated individual decisions, introduces, for its part, a significant hurdle for automated decision-making by stating in para. 1 that "Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing

21   From a general point of view, a maxim relevant for the design and implementation of algorithms appears to be that all algorithms and their implementations should be seen as faulty unless proven correct. A particular problem seems to be that several interconnected algorithms that separately produce correct results may interact within the relevant system in a manner that produces errors.
To prove the correctness of the inter-operability of algorithms is a problem at an entirely different level of complexity and may escape public review already for that reason. Therefore, it could be expected that an algorithm-based system for decision-making is modularly organized so that the different algorithms can work properly with each other, but that in itself already requires that the construction of the various algorithms is open. As pointed out by Cecilia Magnusson Sjöberg, ‘Digital informationshantering i offentlig verksamhet’ in Cecilia Magnusson Sjöberg, Rättsinformatik: Juridiken i det digitala informationssamhället (Studentlitteratur 2016) 310, the right to complain is essential in relation to automated decision-making, because it is in principle not possible to design completely error-free processes for automated decision-making. 22   A reference to the term “logic” is also present in the preambular para. (41): “Whereas any person must be able to exercise the right of access to data relating to him which are being processed, in order to verify in particular the accuracy of the data and the lawfulness of the processing; whereas, for the same reasons, every data subject must also have the right to know the logic involved in the automatic processing of data concerning him, at least in the case of the automated decisions referred to in Article 15 (1); […].”
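The maxim in footnote 21 — that separately correct algorithms may still interact to produce errors — can be illustrated with a minimal, entirely hypothetical sketch in Python; both functions, the field names and the figures are invented, and the composition error is a simple unit mismatch between two modules:

```python
def monthly_income_from_payroll(payments: list) -> float:
    """Module A: correct for its own specification — it returns the
    average MONTHLY income from a list of payroll payments."""
    return sum(payments) / len(payments)

def means_test(annual_income: float, limit: float = 12_000.0) -> bool:
    """Module B: also correct for its own specification — it applies an
    eligibility limit to an ANNUAL income figure."""
    return annual_income <= limit

# Integration error: A's monthly figure is fed to B, which expects an
# annual figure, so nearly every applicant passes the means test.
income = monthly_income_from_payroll([2_000.0, 2_000.0, 2_000.0])
print(means_test(income))        # True — but 24 000 per year should fail
print(means_test(income * 12))   # False — correct once the units agree
```

Each module would pass its own unit tests, yet the combined system is wrong; this is the interoperability problem that, as noted above, escapes review unless the construction of the various algorithms is open.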

of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.”. It appears as if at least some Member States have understood this provision as a prohibition of automated decision-making.23 At the same time, however, the provision of the directive allows for some exceptions by stating that “Member States shall provide that a person may be subjected to a decision of the kind referred to in paragraph 1 if that decision […] is authorized by a law which also lays down measures to safeguard the data subject’s legitimate interests”. Hence automated decision-making is prohibited only if it is not authorized by a law. This directive is repealed as of 25 May 2018, when Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) will become effective in all EU and EEA countries. Essentially, most material provisions of the directive remain in force,24 but in the form of directly applicable EU law. In preambular paragraph 71(2), the regulation states that “[a]utomated decision-making and profiling based on special categories of personal data should be allowed only under specific conditions”.
Such specific conditions are spelled out in the operative provisions of the regulation, such as Article 22 on automated individual decision-making, including profiling, where the EU regulation creates a right for the data subject “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. However, this shall not apply if the decision is, inter alia, “authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests”. Hence this leaves the individual a possibility to opt out from algorithm-based decision-making in cases where the decision-making is not established in provisions of the law, but no such possibility exists where EU or national law contains a specific provision that creates automated decision-making for the particular decision in question. There is therefore a requirement of law underlying automated decision-making. From the point of view of transparency and openness of government in automated decision-making, the issue here is the scope and extent of the term “logic” in Article 12(a) of the Directive. Does the term “logic” mean the same thing as the “algorithm”, or does “logic” mean that the method of decision-making is generally explained, without giving publicity to the algorithm itself? So far, the term “algorithm” has not been mentioned in any judgment of the EU courts, only in an opinion of an Advocate General,25 but not in a context that would be of relevance for the openness of the algorithm (except that it is implied that it is possible to issue a patent for an algorithm; a patent is public information and thus not an obstacle to the possibility to know the operation of the programme). Also, the term “logic” has so far not been dealt with by the EU courts in the data protection context.26 If “logic” equals “algorithm”, then the individual should have the right to know how the algorithm is constructed in order to assess whether the norm on the basis of which the automated decision-making was carried out was correctly applied in his or her individual case. This is only fair in view of the fact that the algorithm actually makes an exception to the law on administrative procedure, and this is why it is important to require that the use of an automated decision-making procedure is established in law for it to be valid from the point of view of data protection.

23   See, e.g., Regeringens proposition till riksdagen med förslag till personuppgiftslag och till vissa lagar som har samband med den (RP 96/1998) 27, where explicit reference is made to the fact that the directive contains, as the main rule, a prohibition of automated decision-making, which should be implemented by the Member States. 24   See the preambular paras. 63 and 71 and Articles 13 (2) f, 14 (2) g, 15 (1) h, and 22.
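The distinction between a generally explained “logic” and the algorithm itself can be made concrete with a deliberately simple, invented sketch; the rule, the threshold and the function below are hypothetical and stand in for whatever operational parameters a published description would leave open:

```python
# A prose description of the kind that might satisfy a weak reading
# of "logic" in Article 12(a):
PUBLISHED_LOGIC = (
    "The claim is granted automatically when the claimed amount is "
    "small and the applicant is insured; otherwise it goes to manual "
    "review."
)

# The algorithm itself pins down what the description leaves open —
# here, a single invented operational parameter:
AUTOMATION_THRESHOLD_EUR = 150.00

def decide(claim_eur: float, insured: bool) -> str:
    """Hypothetical decision routine for a benefit claim."""
    if not insured:
        return "rejected"
    if claim_eur <= AUTOMATION_THRESHOLD_EUR:
        return "granted automatically"
    return "manual review"

# With only PUBLISHED_LOGIC, a party cannot re-compute this outcome;
# with the function above, the decision becomes verifiable.
print(decide(120.0, insured=True))   # granted automatically
```

The sketch shows why the two readings of “logic” differ in practice: the prose description is true of the code, but only disclosure of the code (including the threshold) lets the individual check whether the norm was correctly applied in his or her case.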
In Finnish law, a definition of automated decision can be found in Section 31 of the Act on the Protection of Personal Data (523/1999), according to which such a decision is a decision by means of which an assessment is carried out about certain features of the registered person, which is based solely on automated processing of data and which has legal consequences for the registered person or which otherwise affects him or her in a significant way. Section 31 of the Act permits the making of automated decisions only under a few conditions, one of which is that an Act provides for the automated decision-making. According to the Government Bill,27 an automated decision is defined as a decision that aims at the assessment of certain qualifications of the registered person, made solely by means of automatic data processing, and that has legal effects for the registered person or effects that otherwise significantly affect the said person. The automated decision is, under this assumption, produced without any human involvement. The Government Bill states that the point of departure of the EU directive behind the provision in Finnish law is that such automated decisions are forbidden, although the directive places a duty on the Member States to legislate on the permissibility of automated decisions in the situations established in Article 15 (2) (a) and (b). One of these is the requirement of provisions in law, which in Finland would normally be interpreted as provisions in formal legislation adopted by Parliament. In addition, the above-mentioned Data Protection Act requires in Section 36 that the controller of the register must notify the Data Protection Ombudsman of Finland about the taking into use of such an automated decision-making system referred to in Section 31.28 According to Section 37, the notification must contain information about the logic used in the automated decision-making system.

25   See Opinion of Advocate General Jääskinen, delivered on 25 June 2013, Case C‑131/12 Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González [2014]. The judgment itself by the ECJ did not make reference to the term “algorithm” or to the term “logic”. 26   But see Joined Cases C-317/04 and C-318/04 European Parliament v Council and Commission, Grand Chamber Judgment of 30 May 2006 [2006] ECR I-4795, where Article 12 of the Directive, including the term “logic”, is quoted.
By October 2016, around 25 relevant notifications had been made, most of which deal with automated decision-making systems of financial operators by which selection of customers is carried out or with automated decisions concerning credit and insurance claims.29 It is noteworthy in this context that public authorities have not filed any notifications.30 In addition, the Government Bill on the Data Protection Act makes the point in relation to Section 26 that the registered 27   Regeringens proposition till Riksdagen med förslag till personuppgiftslag och till vissa lagar som har samband med den (RP 96/1998) 68, 74. 28   According to the Government Bill, the notification would not have any legal effects or implications, but would be necessary to introduce in order to follow up the development of the use of automated processes of decision-making, because the experiences concerning such decision-making are so limited (this was the case in 1998). 29   Reply from the Data Protection Ombudsman of Finland, Dnro 2834/41/2016, 13.10.2016. Altogether, seven notifications deal with credit systems and twelve have come from insurance companies. 30   However, such a notification has no legal consequences under Finnish law, but is only a requirement by the law for informational purposes.

person must be secured the right to receive from the controller of the register information about the logic of the automatic data processing, at least concerning automated decisions in single cases. Furthermore, the Government Bill repeats this by stating that the registered person in such situations should have the right to receive information about the principles of operation of automated decision-making. In Swedish law, similar provisions can be found in Section 29 of the Act on the Protection of Personal Data (1998:204) concerning automated decisions. If a decision has legal consequences for a natural person, or otherwise significant consequences for that person, and is based entirely on automated processing of such personal data as is aimed at assessing individual qualifications of the person, then the person affected by the decision shall, according to the provision, have the possibility to get the decision reviewed by a person. Furthermore, everyone who has been the object of such a decision has the right to receive, upon application, information from the controller of the personal data on what it is that has steered the automated processing that has led to the decision. Because the algorithm replaces the administrative procedure normally carried out by civil servants, and because mistakes in ordinary administrative procedure are normally appealed to a court of law, a likely way to find out whether the algorithm is public is through complaints procedures in individual cases.
Of course, some complications in administrative process vanish with automated decision-making, such as conflict of interest issues, while some due process issues may be handled with greater precision and accuracy than by a human being. But because an algorithm steering the automated decision-making process is only as good as its software producer, there is the chance that one or several mistakes or “bugs” remain in the programme until it is tested in relation to individual cases. One way of testing the outcome of automated decision-making programmes is by means of complaints procedures, which should always be available. When such programming mistakes are detected and eradicated, the programme is perfected and might give fewer or no reasons for complaints, which would also relieve courts of law of a certain case-load. However, the first case or first cases that are needed to identify programming mistakes might also be testing grounds for the transparency and openness of the algorithm. What is the reply of the legal order to the paradigm shift, or to the era of the algorithm, in the issue of the openness of the algorithm? The matter does not seem to be regulated at the level of international law.31 It appears as if EU law were still moderately negative about the phenomenon of automated decision-making, although it opens up the logic of the automated decision-making procedure for review at least for the person affected by such decision-making. This would seem to mean that the immediate party has a right to know the logic, whatever that may mean, while the right would not be enjoyed by everyone, such as persons who are not (yet) involved in automated decision-making procedures as parties but who might want to find out what sort of a decision the automated system produces. Most applications of the concept of the algorithm in law are currently to be found at the national level, where the phenomenon is probably mainly dealt with by means of general legal concepts such as ownership and intellectual property, if at all. To some extent, the use of algorithm-based decision-making is regulated in specific legislation, but after some passage of time, when the issues become more refined, it is likely that legal orders will pass rules of a more general nature with regard to algorithm-based decision-making. What might be awaiting at the end of the development of national rules concerning algorithm-based decision-making is the creation of constitutional norms about the phenomenon, potentially fashioned as a constitutional right of the individual concerning the openness and transparency of the algorithm. In that case, the regulation of the algorithm and its transparency would have undergone the same evolution as the combination of language and transparency. The proposition that the algorithms used for automated decision-making by public authorities should be transparent and public, and thus accessible to all, is by no means new.
Already at the beginning of the 1990s, the point was made that the algorithm might be regarded as a normative decision by the governmental authority setting up an automated decision-making system, and that as a consequence, at least under Swedish law, such a normative decision should be public.32 More recently, the idea of “code as law” has been presented,33 and the consequence of such a position would also be that the algorithm should be publicly known and knowable to all. The starting point of these arguments is that the algorithm or the programme code could play a normative role and that normative principles should be developed that could be used as a yardstick for assessing the operation of various algorithms from a legal perspective, because algorithms regulate and guide human behaviour and often create a default position from which the individual cannot escape.34 From the point of view of public administration, the question of whether the algorithms or codes underlying automated decision-making should be public is thus highly relevant and can perhaps only render one answer: they should be public. However, the follow-up question of whether the algorithm or code used for entirely private purposes, such as business operations, should also be public may be another matter, and will not be dealt with in this context.35 The provisional proposal

31   For instance, the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data of 28 January 1981 (CETS No. 108) and the Additional Protocol to it from 2001 (CETS No. 181) do not seem to regulate the use of algorithms or automated decision-making in individual cases. 32   For an early analysis of automated decision-making in the state administration of Sweden, see the interesting contribution by Cecilia Magnusson Sjöberg, Rättsautomation – Särskilt om statsförvaltningens datorisering (Norstedts Juridik 1992), in particular 79–80, 112 and 184 (where the author makes the point that external control could take place by means of complaints mechanisms), 181–184, 350 (where the author makes the point that the algorithm should perhaps be published in the statute books as any set of public regulations), 367, 456, 480, 502–503, 508–510. See also Dag Wiese Schartum, ‘Law and algorithms in the public domain’ [2016] Etikk i praksis. Nord J Appl Ethics 15 (hereinafter: Schartum 2016a): “An important part of this process is embedded legal decision-making that may be seen as ‘hidden’ quasi-legislation, representing processes and decisions that are only recognized and graspable by the few initiated.” See also Schartum (2016a) 18–23. 33   See Marco Goldoni, ‘The Politics of Code as Law: Toward Input Reasons’, in Anna-Sara Lind, Jane Reichel & Inger Österdahl (eds), Information and Law in Transition – Freedom of Speech, the Internet, Privacy and Democracy in the 21st Century (Liber 2015) 115–133, and the references made therein to works of other authors.
However, at 124, Goldoni accounts for the thoughts of Bert-Jaap Koops, who places transparency of the code only on a secondary level of criteria among some other issues, regarding it as a lower criterion in relation to primary criteria, such as human rights, moral values such as autonomy and dignity, and procedural values such as democratic decision-making and inclusive participation. In our view, however, transparency of the code could well be placed amongst the primary criteria. In fact, at 125, Goldoni accounts for Lon Fuller’s criteria on the assessment of code as law, and that set of five criteria indicates that algorithms should actually be accorded a position of importance in this respect; Goldoni arrives at that conclusion in his thoughts on input criteria of legitimacy. 34   As pointed out by Goldoni (2015) 121, an individual might not be able to choose whether to obey a rule or not, because “code may not leave any possibility of choice”. “Besides, code can provide for perfect enforcement, leaving no room for breaking (or disobeying) a normative rule.” See also Goldoni (2015) 128, 130 (where he thinks normative criteria and possible constitutional safeguards should be extended from the area of public authorities to private development of regulation in the form of code). For three different legislative strategies regarding the creation of the algorithm and automated decision-making, see Schartum (2016a) 20–23. 35   See Pasquale (2015) 10, 17, 19–58, where he presents the point that the increase in algorithmic decision-making may fade away the distinction between state and market, that is, between public and private decision-making, and thinks that also private algorithms, used by business corporations to perform surveillance of individuals as concerns their reputation, search and finance, should be transparent and under public oversight. Because the creation of such mechanisms would interfere with the right of property and of business in a serious manner, regulation to that effect should probably involve rules at the level of the constitution and detailed rules in ordinary legislation. See also Pasquale (2015) 140–147, for a discussion on the spectrum of disclosure or transparency that could serve as a point of departure for legislation on the openness of the algorithm.

is, however, that also algorithms used by private entities for automated decision-making should be public.

4. Use of Algorithm-Based Decision-Making in Finland and Sweden

It appears that in Finnish public administration, automated decision-making has been proliferating during the past few years. At least within the Social Insurance Institution, this has very recently led to the understanding that an overview of the automated processes of the SII should be produced in order to facilitate further analysis, something that happened during the summer of 2016. The taxation procedure has already, for a good number of years, appeared, at least to the tax payer, as a system where automated procedures of decision-making are used.36 In Swedish law, at least the congestion tax is levied in an automated process, and the likelihood is great that several other decisions by public authorities are made by means of automated decision-making, to an extent that may make Sweden a front-runner in this respect.37 Automated decision-making works on the basis of an algorithm that connects the legal norm with such information requisite for making the decision as can be supplied in electronic form without any human involvement so as to produce legal effects. Automated decision-making aims at producing an implementable decision, undersigned (in Finland) by the public authority on the basis of Section 16 of the Act on Electronic Communication with the Administration (13/2003), which provides a public authority with the opportunity to sign a decision, that is, the document of
36   For an early analysis of automated tax administration in Finland, see Jorma Kuopus, Hallinnon lainalaisuus ja automatisoitu verohallinto (Lakimiesliiton kustannus 1988) 30, 119, 182, 220, 265, 313, 372, 493, 534, where he identifies essential problems, although he does not seem to deal with the openness of the algorithm. 37   See Magnusson Sjöberg (1992) 101, 251, 329 f., 344–348, 360, 375, 378, 382, 430, 487, 496–497, 504–505, 527.

decision-making, electronically.38 For this reason, it is possible to ask which norms in law have been connected with automated processes and whether the automated decision-making processes at, for instance, the National Board of Taxation of Finland, the Social Insurance Institution of Finland, and the National Board of Traffic of Sweden fulfil the requirement of regulation in national law of such automated processes. The 2005 Government Bill proposing amendments to the Act on the Taxation Procedure (1558/1995)39 contains the explicit aim to use automated decision-making in Finland as much as possible on the basis of information gathered from the person under the duty to pay taxes and on the basis of income and tax deduction information gathered from elsewhere. The explicit intention of the Act is that the tax information of all those under the duty to pay taxes is controlled by means of an automated process. In addition, the tax authorities use selection criteria through which the automated process takes out a part of the materials,40 such as 30 per cent, as mentioned in the Government Bill, for a more detailed control in a case-by-case manual process, which ensures an equitable application of the selection criteria over the entire country. According to the Government Bill, general grounds for the selection criteria would be the nature and extent of the matter as well as the equal treatment of those under the duty to pay taxes and the needs of tax control. The Act on the Taxation Procedure does not, however, contain any information on how the automated decision-making process concerning taxation operates or how the selection criteria are used, only a number of provisions in Sections 15 through 24 on the duty of third parties to give information relevant for taxation to the tax authorities. According to the Government Bill, the information about the selection criteria would be covered by the secrecy grounds of Section 24, para.
15, of the Act on the Public Nature of Information with the Public Authorities (621/1999), where secrecy is made possible for documents that contain information about a control or other matter involving the checking of circumstances. The Government Bill states that tax control and its object would be endangered if information were given about the selection criteria, which would be against a compelling public interest. Therefore, the selection criteria are to be held secret to the extent the Act presupposes.41 This, of course, is interesting from the point of view of the algorithm, because it can be maintained that the selection criteria are part of the algorithm on the basis of which the taxation system operates. If the selection criteria are secret, as a consequence the algorithm is probably also secret, because revealing the algorithm would at the same time reveal the selection criteria. An outstanding issue here is whether the explanation in the Government Bill is sufficient to satisfy the requirement of explanation of “logic” in EU law or whether the logic of the automated procedure should be opened up in greater detail, either in the Government Bill or in the Act. In the automated decision-making, the tax authorities do not control each case as to the correctness of the incomes and deductibles or the grounds and amounts affecting the taxation of the person. To the extent the individual tax case is not selected to be controlled in a manual procedure, the taxation decision is made in an automated procedure. However, in order to prevent fraud and in order to enforce the strict observance of the duty to declare and file tax information, the tax authorities select by means of random sampling a group of persons with the duty to pay taxes for whom all components of taxation are separately controlled. Hence only a smaller portion of the tax decisions are made by a physical person, a civil servant.

38   On electronic communication with administration, see Heikki Kulla, Förvaltningsförfarandets grunder (Talentum 2014) 14, 64–73. 39   Regeringens proposition till Riksdagen med förslag till ändring av vissa bestämmelser i anknytning till beskattningsförfarandet (RP 91/2005) 6, 14. 40   Regeringens proposition till Riksdagen med förslag till ändring av vissa bestämmelser i anknytning till beskattningsförfarandet (RP 91/2005) 18–19.
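The two routes into manual control described in the Government Bill — selection criteria and a random sample — can be sketched as follows; the criteria, the sampling rate and the field names are invented for illustration and do not reflect the actual, secret criteria:

```python
import random

def select_for_manual_control(case: dict, rng: random.Random,
                              sample_rate: float = 0.02) -> bool:
    """Route a tax case to manual control. Hypothetical sketch:
    (1) selection criteria (here two toy rules standing in for the
    secret ones), then (2) a random sample for general deterrence."""
    if case["deductions_eur"] > 10_000:
        return True
    if case["income_changed_pct"] > 50:
        return True
    return rng.random() < sample_rate

# A fixed seed keeps the sketch reproducible.
rng = random.Random(2016)
print(select_for_manual_control(
    {"deductions_eur": 12_000, "income_changed_pct": 0}, rng))  # True
```

The sketch also makes the secrecy argument concrete: publishing the function would necessarily publish the thresholds, so openness of the algorithm and secrecy of the selection criteria cannot be had at the same time.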
According to information from the National Board of Taxation of Finland, although taxation takes place digitally and in an automated process, the process itself does not fulfil the criteria of automated decision-making: all those with the duty to pay taxes receive a pre-filled tax form (the so-called taxation proposal), which contains the information that the taxation proposal is based on, as collected by the tax authorities from different registers and from employers. The person to be taxed may react to the tax proposal before a set date, but if the person has not reacted by the established deadline, the tax proposal is converted into a taxation decision on the basis of which the possible tax bills are issued for payment or – as the case may be – the tax authorities pay back excess taxes to the person. Therefore, although 41   Regeringens proposition till Riksdagen med förslag till ändring av vissa bestämmelser i anknytning till beskattningsförfarandet (RP 91/2005) 19.

the taxation procedure may be automated to a large part but not entirely, it would seem to fulfil the application criterion in Article 2(1) of the General Data Protection Regulation of the EU that the regulation “applies to the processing of personal data wholly or partly by automated means”. As a rule, the Finnish Social Insurance Institution uses a certain backdrop of automated processes in all of its decision-making, including in decisions made manually by a civil servant.42 The new handling system of the SII is automated as to many of its components, such as billing and payment to pharmacies for vouchers.43 The automated processes deliver a large amount of information about the applicant of a benefit, both from registers of the SII (benefits already granted or denied, etc.) and from, inter alia, the Population Data Centre (e.g., home municipality, age, language for the purposes of contact in Finnish, Swedish or Sami, etc.).44 Electronic handling of a matter facilitates a holistic approach by indicating the impact of the benefit about to be decided on other benefits that the person may already enjoy. An increased use of income registers makes possible a further automatization of decision-making concerning several benefits.45 Consequently, the SII has a relatively large number of entirely or almost entirely automated decision-making procedures (although an interlocutor expressed the opinion that the automated procedures outlined below give the impression of a much more completely automated system than the system is in reality). Examples of automated decision-making at the SII include reimbursement of travel expenses,46 direct payment of health care and taxi journeys,47
42   The facts about the use of automated decision-making procedures at the SII stem from a PowerPoint presentation kindly delivered to the author of this piece by Ms. Heli Kauhanen, Unit Leader at the SII (hereinafter: Kauhanen PPP 2016).
43   Kauhanen PPP 2016. 44   Each public authority has continuous on-line access to the databases of the Population Data Centre, and they have, in fact, a duty under the law to use the information of the databases of the PDC. In addition, the SII has, according to Chapter 19, Section 10, of the Sickness Insurance Act (1224/2004), the right to open up technical interfaces with other registers and to receive even such information, detailed in the Act, that otherwise would be secret under Section 24 of the Act on the Public Nature of Information with the Public Authorities (621/1999). 45   Kauhanen PPP 2016. 46   Decision by the SII according to Chapter 4 of the Sickness Insurance Act (no reference in the Act or the Decree to automated procedure). According to Kauhanen PPP 2016, the automated decision concerning the benefit is produced on the basis of an electronic application submitted by the person. A greater part of the applications is channeled to the automated process, while applications displaying certain criteria are transferred to manual procedure. 47   Kauhanen PPP 2016.

internet-based ordering of the European Health Insurance Card,48 payment of unemployment allowance, control of unemployment allowance and termination of unemployment allowance,49 conscript’s allowance,50 granting of government guarantee to study loan,51 government guarantee for adult education loan,52 study grant and housing grant to students,53 transfers to 48   Decision by the SII according to Section 15 of the Act on Transborder Social and Health Care (1201/2013) (no reference in the Act or the Decree to automated procedure). According to Kauhanen PPP 2016, the European Health Insurance Card is ordered via the internet. The order is channeled directly to a database at the SII that contains the orders that will be sent to the producer of the cards. No measures are required from the handling civil servant, the order does not appear on the work row, and the orders thus do not emerge as matters for manual handling. 49   An unemployment allowance is initially granted on the basis of a written application that is manually handled, as established in Chapter 5, Section 1, and Chapter 11, Section 1, of the Act on Unemployment Allowance (1290/2002), but the Act does not contain any provision on automated procedures (and there does not seem to exist any Decree about the matter). According to Kauhanen PPP 2016, automated procedure applies thereafter, inter alia, for the payment of the benefit, control of the benefit and termination of the benefit (e.g., for reasons of re-employment or reaching the age of 65). More specifically, 51.7 % of the payments of the benefit are made automatically on the basis of an electronic notification by the person over the internet concerning the term of unemployment (eTT2).
Automated procedure is used for the control of the benefit when the term of 500 days of the basic benefit is full and it is possible to grant a so-called labour market subsidy, and when the term of 200 days of the supplement for the active period is full. Automated procedure is used in the termination of the benefit when the individual submits an electronic notification over the internet about re-employment or when the automated system concludes, on the basis of the social security number of the person, that the person enjoying the benefit has reached the age of 65. According to Section 3 of the Government Decree on Public Manpower Services (1344/2002), an individual is, as a rule, registered as a job-seeker via an electronic registration.
50   A so-called conscript's allowance can be granted by the SII on the basis of the Act on the Conscript's Allowance (781/1993) to the wife, husband, or partner of a person carrying out his or her military service, alternative service, or the voluntary military service for women. There is no mention of an automated procedure of decision-making in the Act. According to Kauhanen PPP 2016, the SII receives information directly from the Defence Forces about the termination of military service, whereupon the conscript's allowance to the partner is automatically terminated.
51   On the basis of Section 8 of the Act on Study Grants (65/1994), the SII is the public authority in charge of deciding on study grants, including the government guarantee of study loans on the basis of Section 15. According to Kauhanen PPP 2016, the government guarantee is renewed annually for the next academic year on the basis of the grounds of the original decision.
52   A government guarantee for loans for adult education may be granted on the basis of Section 13 of the Act on Support for Adult Education (1276/2000) and Section 23 of the Act on Study Grants (65/1994).
According to Kauhanen PPP 2016, a decision concerning such a government guarantee is made in an automated process on the basis of an application that the individual has submitted over the internet.
53   A crediting of the study loan can be made (and a diminishing of the study loan, now repealed, could be made) on the basis of Sections 15(b) through 15(e) of the Act on Study Grants (65/1994). According to Kauhanen PPP 2016, almost all decisions are made in an automated process. The SII receives

pharmacies,54 insuring a person in the SII system,55 and child benefit that has been applied for electronically.56 In addition, automated procedures are used concerning individual decisions on municipal supplements to the child day care allowance, adjustments of indexes for different benefits and adjustments due to changes in legislation, and the plan is to extend automated decision-making also to the regular social allowances.57 The SII makes, according to the relevant Act, decisions about these benefits. However, the relevant legislation does not seem to contain any provision at the level of an Act of Parliament concerning automated procedures of decision-making, nor is there any such provision in the Decree on Sickness Insurance (473/1963) or other relevant decrees. Interestingly, several of the Acts contain a right

information about the day of graduation from the educational establishments and information about study loans from the banks, and on the basis of these pieces of information a decision can be made for the greater part of individuals in an automated process. In fact, there are several situations where automated processes are used to control study grants, such as control of the study grant on the basis of study information received from the educational establishments (registration for active studies, graduation, termination) once every month; control of the study grant against the confirmed tax information of the parents once a year; control of the study grant against the income of the student in the income control once a year; control of the study grant in certain situations of change; control of the study grant on the basis of information given over the internet concerning advance termination of the study grant; control of the rent subsidy against information received from student housing corporations once a month; and termination of the rent subsidy in a part of the cases on the basis of information received from student housing corporations in the periodic control of the rent subsidy.
54   On the basis of Chapter 15, Section 9, and Chapter 19, Section 5, Sub-Section 1, para. 5, of the Sickness Insurance Act, direct transfers of the public compensation for medicines that an individual is entitled to are made to the pharmacies. According to Kauhanen PPP 2016, the pharmacy checks the right of the customer to direct compensation against the direct compensation information service. During 2015, the pharmacies made around 40 million queries to the service. After the purchase, the 810 pharmacies of Finland send information about the purchase of medicine to the reception service for medicine purchases and transactions, and during 2015, more than 27 million purchase notifications and some 10,000 monthly transaction notifications were sent by the pharmacies to the service. When the compensation ceiling is reached, most of the notification letters are mailed automatically, but a part of the ceiling matters is taken up for manual handling.
55   According to Kauhanen PPP 2016, the insurance information of persons who have been abroad and who return to Finland is updated automatically. The address and benefit information of such persons is controlled by a programme and the persons are automatically updated as persons who have returned to Finland.
56   The SII is, according to Section 3 of the Act on the Child Benefit (796/1992), in charge of deciding on the granting of the child benefit. According to Kauhanen PPP 2016, a child benefit that has been applied for electronically is decided in an automated process from February 2017 on. However, a portion of the child benefit decisions are complicated in a way that does not make it possible to use an automated decision-making procedure for all of the decisions.
57   Kauhanen PPP 2016.
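The age-based automatic termination described in note 49 relies on the fact that the classic Finnish personal identity code encodes the date of birth: six digits DDMMYY followed by a century marker ('+' for the 1800s, '-' for the 1900s, 'A' for the 2000s). A minimal sketch of such a check, using an invented identity code, could look as follows; the SII's actual implementation is of course not public.

```python
from datetime import date

# Century markers of the classic Finnish personal identity code.
CENTURY = {"+": 1800, "-": 1900, "A": 2000}

def birth_date(pic: str) -> date:
    """Extract the date of birth from an identity code of the form DDMMYYCZZZQ."""
    day, month, yy = int(pic[0:2]), int(pic[2:4]), int(pic[4:6])
    return date(CENTURY[pic[6]] + yy, month, day)

def has_reached_65(pic: str, today: date) -> bool:
    """True if the holder of the code has reached the age of 65 as of 'today'."""
    born = birth_date(pic)
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= 65

# Invented identity code for a person born on 1 January 1950.
print(has_reached_65("010150-123X", date(2016, 12, 31)))  # -> True
```

A termination triggered by such a check is deterministic and uncontroversial in itself; the legal question raised below is that no statutory provision authorizes the check to replace a civil servant's decision.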

for the SII to receive necessary data from other public authorities, even when the data falls under the secrecy provisions of the Act on the Public Nature of Information with the Public Authorities. In addition, the legislation concerning the electronic registers of the SII provides for a possibility for the SII to transfer its information to other authorities. Also, there exist explicit provisions on technical interfaces, presumably for electronic systems, between the SII and other public authorities that facilitate the transfer of information. This is the case, for instance, in Chapter 19 of the Act on Sickness Insurance and in Sections 41(a) through 43(a) of the Act on Study Grants (65/1994). The infrastructure in terms of the electronic mechanisms necessary for automated decision-making is thus in place and such decision-making is also used to an increasing extent, but the phenomenon is not explicitly outlined in the legislation concerning the benefits managed by the SII. When an interlocutor at the SII was asked whether the algorithms underlying the automated decision-making would be public, the reply was that they are not public and that, by way of example, there is no reason to inform the public about the frequency of random samples in cases of payments where amounts less than, for instance, 50 euros are not controlled, because such information could lead to misuse of the system.58 Against this background, with examples from two public authorities, the Tax Authority and the Social Insurance Institute, Finnish law appears to lack specific provisions for decisions that can be made in an automated process, and it is possible that the situation is at least somewhat similar in other countries.59 For that reason, at least legally, the entire Finnish Act on Administration (434/2003) should apply to automated decision-making.
In practice, however, it seems that several provisions of the Act on Administration may be affected by the transfer of decision-making from traditional manual handling to automated procedure. Such provisions could preliminarily be, for instance, sections 6 on the general administrative principles (equality, misuse of power, objectivity, proportionality); 12 on counsel (and as a consequence section 13 on secrecy of communication); 14 on the procedural status of a mentally incapacitated person (and, as a consequence, section 15 on the right of the representative of an incapacitated person to represent that person); parts of section 22 on completing the documentation; 26 on interpretation and translation; 27 on conflict of interest (and the grounds for such conflict in section 28); 34 on the hearing of the party; 35 on the hearing of the representative of an (incapacitated) party; 38 on inspection; 39 on the control visit; and 40 on the oral hearing of witnesses.60 These provisions would, in practice, be set aside by default at the moment an automated decision-making system is brought into operation, even when there is no provision about such decision-making in an Act of Parliament or a decree of government. In Swedish law, several examples of more explicit provisions concerning automated decision-making exist, namely concerning the traffic congestion tax, the imposition of customs duties, the tax on real property, and the decision on the old-age pension. Section 2 of the Swedish Act on the Traffic Congestion Tax (2004:629) identifies the National Board of Traffic as the public authority that decides, on behalf of the National Board of Taxation, on the imposition of the congestion tax in individual cases. This takes place by means of an automated procedure on the basis of information in the road traffic register. In practice, this means that toll cameras photograph the registration number of each vehicle passing through, and the tax is automatically charged once a month to the owner of the vehicle. No civil servant is physically involved in this automated process, except when the owner of the car decides to start a review process concerning the automated decision on the tax.

58   Pasquale (2015) 107, 153, describes this as gaming of the system by the users.
59   For examples from Norway, see Schartum (2016a) 19 (with footnote information), mentioning the housing benefit system from 1972 as the first fully automated legal decision-making process in government administration without any elements of human assessment. See also Dag Wiese Schartum, ‘From Algorithmic Law to Automation-friendly Legislation’, published by the Society for Computers & Law in 2016 (hereinafter: Schartum 2016b), at http://www.scl.org/site.aspx?i=ed48414 (accessed on 21 December 2016).
The Act on the Traffic Congestion Tax makes reference in Section 3(a) to the Act on Administration (1986:223) and provides that Sections 26 and 27 of the Act on Administration shall not be applicable to decisions of the National Board of Traffic under the Act on the Traffic Congestion Tax. The former provision deals with the correction of writing errors and the latter with the general review of administrative decisions, for which there are special provisions in the Act on the Traffic Congestion Tax, Sections 14(a) through 15(b). Hence the Swedish law-maker has recognized the

60   In addition, as noted by Pasquale (2015) 9, 20, 27, 35, 38–40, 61, 213, automated decision-making does not necessarily eliminate discrimination, although it looks neutral, but may in fact facilitate subtle forms of discrimination. On this issue, see Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Richard Zemel, ‘Fairness through Awareness’, arXiv:1104.3913v2 [cs.CC], 29 Nov 2011 (https://arxiv.org/pdf/1104.3913v2.pdf, accessed on 20 January 2016), 1–2, where the authors advocate that the metric underlying the automated classification of individuals should be public so that potential discrimination could be countered and fairness increased.

need for a provision in the Act that exempts some parts of the automated decision-making procedure from the applicability of the Act on Administration. It nonetheless appears as if several other provisions of the Act on Administration become obsolete due to automated decision-making concerning the congestion tax, preliminarily sections 9 on counsel, 11 and 12 on conflict of interest, 13 on requests for statements from other public authorities, 14 on oral procedure, and 16 on the party’s right to receive information.61 The exemption created by the law-maker is thus likely to be very limited in practice. On the basis of Chapter 5, Section 1, of the Customs Act (2000:1281), the imposition of customs duties may take place by means of automated decision-making, and the same is the case concerning taxation decisions on real property, which can take place by automated decision-making on the basis of Chapter 20, Section 2(a), of the Act on Property Tax, provided that it is permitted under Section 20(1), para. 1, of the Act on Administration not to give reasons for a decision.
A similar reference to the permitted exclusion of reasons as a precondition for automated decision-making exists for the old-age pension on the basis of Chapter 112, Sections 6 and 7, of the Social Security Act (2010:110), and this type of decision-making can also be used for the survivor’s pension and the survivor’s supplement.62 However, it appears as if the lack of explicit provisions in Swedish law has not prevented Swedish public authorities from introducing automated decision-making for a host of other decision-making procedures.63 Therefore, although the examples above indicate that there is a greater awareness about the need to regulate automated decision-making in Sweden than in Finland, the legal framework for automated decision-making is still in a nascent state in both countries and does not actually exist as an

61   Section 8 on the right to an interpreter, however, is not necessarily made obsolete, because in the decision concerning the congestion tax, the author of this article, who travelled with the car from Finland to Sweden, received a decision that had been translated into Finnish by an authorized translator, which indicates that a physical person had made the translation. However, that physical translator may have used software to do the actual translating. See also Magnusson Sjöberg (1992) 375, who seems to be of the opinion that it is more difficult to observe a conflict of interest in computer-based administrative procedure than in manual administrative procedure. What seems to be happening, in our opinion, is that ordinary situations of conflict of interest do not arise in automated decision-making (but such might of course be present when the software is produced).
62   Magnusson Sjöberg (2016) 308–309.
63   Magnusson Sjöberg (2016) 309.

effective legal concept in the national legal order in a manner that would, in each case, require a legal basis in an Act of Parliament.64 As shown above, EU law establishes that requirement from the point of view of data protection rules. It is also possible that such a requirement could be read into the legality principles and the principles of the rule of law in constitutions, such as section 2(3) of the Constitution of Finland and chapter 1, section 1(3) of the Form of Government (Constitution) Act of Sweden. As is evident on the basis of the limited review of automated decision-making above, automated algorithm-based decision-making is an important feature of public administration.65 With the aim set at an increased use of automated decision-making, it is likely that certain technical requirements will start to emerge concerning the construction of the norms on the basis of which automated decision-making will be used. Such technical requirements may involve the creation of very categorical sets of alternatives, which exclude any administrative discretion when applied in the automated process. Hence the wish to use an algorithmic decision-making process may lead to a certain construction of the underlying legal norms, such as clear answers (yes or no) that can be placed in boxes which can be ticked on the computer screen or read by optical means.66 The more exact and categorical the language of the law is made for the purposes of automated decision-making, the less need there is for manual procedures. The need for an algorithm to facilitate the creation of automated decision-making may become an important consideration when laws are drafted in the future. In other words, making the algorithm the basis for the exercise of

64   For a law-drafting project in Sweden on the legal conditions for a digitally interacting administration, see Kommittédirektiv 2016:98 – Rättsliga förutsättningar för en digitalt samverkande förvaltning.
However, the project explicitly excludes proposals at the level of constitutional law.
65   For a Nordic and in particular a Danish perspective, see Niels Fenger, ‘Borgeren og digitalisert forvaltning – hvor går vi?’, Report at the 40th Nordic Meeting of Jurists, Oslo 2014, at http://nordiskjurist.org/wp-content/uploads/2014/07/referent6.pdf (accessed on 16 January 2017).
66   See Schartum (2016a) 20–23, for three overall models of computer-conscious law-making. See also Schartum (2016b): “In several branches of government administration in Norway, for example, public officials are actively amending the law by replacing discretionary and blurred facts with fixed facts that can be automatically accessed from machine-readable sources.” See also Schartum (2016b) for a description of the transformation of legal norms into logical operations performed by the algorithm, including a flowchart: “Legislation, for instance, containing vague or undefined concepts having an uncertain logical structure, not addressing all legal problems within the domain and so on, gives plenty of leeway for interpretation and thus potentially ample opportunity for people in the transformation process to act as if they were legislators.” For an analysis of transformation processes, see also Magnusson Sjöberg (1992) 191–275.
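The transformation Schartum describes in note 66, replacing discretionary and blurred facts with fixed, machine-checkable facts, can be illustrated with a deliberately simplified sketch. The benefit, the income limit, and the notion of “special reasons” are all invented for illustration; the point is only that the fixed-criteria version answers every question with a yes or a no, while the discretionary version must hand some cases back to a civil servant.

```python
from typing import Optional

def eligible_fixed(household_income_eur: float, dependants: int) -> bool:
    """Fixed-criteria version: every condition is a categorical yes/no test."""
    income_limit = 1200.0 + 300.0 * dependants  # fixed threshold, no discretion
    return household_income_eur <= income_limit

def eligible_discretionary(household_income_eur: float, dependants: int,
                           special_reasons: Optional[str]) -> Optional[bool]:
    """Discretionary version: some inputs cannot be resolved automatically."""
    if special_reasons is not None:
        # A vague statutory concept ("special reasons") cannot be reduced to
        # a tick-box; the matter must be routed to manual handling.
        return None  # None = undecided, requires a civil servant
    return eligible_fixed(household_income_eur, dependants)

print(eligible_fixed(1400.0, 1))                            # -> True (limit 1500)
print(eligible_discretionary(2000.0, 0, "sudden illness"))  # -> None
```

Drafting the law so that only the first function is ever needed is precisely the computer-conscious law-making discussed above: the elimination of the None branch is achieved not in the code but in the wording of the statute.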

public power may actually drive the law-maker in the production of provisions, but at the same time frame the law-maker’s actions so as eventually to circumscribe the legislature. According to an interlocutor, limited transparency is not the only problem related to proprietary algorithms: in the worst case, the algorithm not only produces unfair results but may also resist change as the existing technology starts to frame what is “possible” in terms of new legislation. Therefore, transparent algorithm design and implementation are necessary to ameliorate possible problems. There is, however, a paradox in the expectation that “the code will give precise and full information to everybody who could read it”: “Programming code is not written to be read by people, and will only have value for a very small group of programming experts. Hundreds or thousands of lines of programming code are not likely to create much openness.”67 This granted, it would nevertheless be of fundamental importance that the algorithms supporting the automated decision-making of public authorities are open for review by all,68 although a non-expert might have to rely on the assistance of an expert, an “algoreader”. The position of an “algoreader” would not be very different from the position of a lawyer in relation to a person with a legal claim, where the lawyer assists the client in finding the meaning of the law, nor from the position of a priest in relation to an individual to whom the meaning of God’s word needs to be explained (see figure 2 above).69 In parallel, a public authority could be given an oversight task including the review of the correctness of the algorithm.70

5. Conclusions: Potential for Evolution and Legal Consequences

Algorithm-based decision-making in public administration is on the rise, and it is possible that around the 2020s societies will be in the midst of a paradigm shift from a language-based legitimacy towards an algorithm-based legitimacy for the exercise of public authority. So far, automated decision-making is mainly practiced in areas where massive numbers of decisions are made, such as taxation, social benefits, and congestion taxes, but it can be assumed that the automated procedure of decision-making will continue to spread to new areas. We should not be surprised if and when automated decision-making is used for ever more complicated issues. The development of artificial intelligence from the specific artificial intelligence needed for, for instance, autonomous vehicles and the production of newspaper articles towards, or at least close to, general artificial intelligence will most likely lead to the use of automated decision-making in ever more complicated decision-making situations. It seems that in such a scenario, an algorithm-based decision-making procedure could also deal with more complex information that is not of a binary nature but requires “consideration” and the taking into account of features such as proportionality and feasibility, a talent that until now has been connected to physical decision-makers, that is, civil servants. The law-maker has been active in the area of automated decision-making from the 1980s on, adopting at the EU level a relatively negative attitude to automated decision-making but allowing it where, inter alia, the use of automated procedures is established in law and where the individual has a right to opt out from automated decision-making. The former requirement is probably not fulfilled very conscientiously, presumably because public authorities are not too well aware of the issues and problems involved in automated decision-making.

67   Schartum (2016a) 20.
68   In case an algorithm is understood as a document, it is not excluded that it could be accessible to everyone under the Swedish Freedom of the Press Act, as is possibly indicated by Magnusson Sjöberg (1992) 371, 496–498, 509, but the matter has probably not yet been tested by anyone requesting to obtain an algorithm from a public authority.
69   Pasquale (2015) 8: “Transparency is not just an end in itself, but an interim step on the road to intelligibility.”
70   Pasquale (2015) 141, 157, 165, 169.
Also, the right of the individual to opt out from automated procedures, which in effect translates into a right to claim a manual procedure, is not fulfilled in the specific legislation that might apply (and, if fulfilled, massive opting out could mean that the resources of the civil service would be strained), except that regular review of the automated decisions through administrative review mechanisms and courts of law is still possible (and automated decision-making has the positive feature of making it easy to attach instructions for complaints to automated decisions). Complaints procedures appear, in fact, to be the only way in which an individual would be able to find out whether the digitalized decisions are correctly made. The same seems to apply to the transparency of the algorithm or the “logic” of the automated decision-making procedure: administrative agencies are unwilling to open up the detailed logic of the

automated decision-making process and maintain secrecy in that respect, because transparency might compromise the efficiency of the automated procedure. This is probably an important legitimacy issue of the same kind as was present in 1766, when, finally, the parliamentary papers and other public documents were made accessible to all. However, it appears that the right to know the “logic” is a right of the party only, not a general right of everyone, that is, of non-parties to the administrative matter. What happens when automated procedures of decision-making are used is that the underlying algorithm not only translates provisions in material law on, for instance, different levels of subsidies or tax percentages into digital form, but the algorithm also replaces the law on administrative procedure. The consequence for the law of administrative procedure is that the algorithm renders obsolete provisions in formal legislation concerning administrative procedure, which are a public yardstick for the private party concerning the correct handling of the administrative matter of the individual. They are also the yardstick for everyone else concerning the correct handling of administrative matters in general. If this yardstick of legitimacy for public administration cannot be applied to the algorithm for reasons of “secrecy”, the decision-making becomes non-transparent and – potentially – non-legitimate. Apparently, the digital society runs the risk of being much less open than society is today, perhaps entirely closed to the extent that automated decision-making is used.
The relationship between automated decision-making and the legal safeguards in the law on administrative procedure may thus be complicated, in particular if a general mention in an act about the use of automated decision-making (or even the mere use of automated decision-making by the decision of the administrative agency) can potentially negate the applicability of an act on administrative procedure and set aside by default a number of the legal safeguards of good government. Currently, automated algorithm-based decision-making is generally regulated both at the EU level and in national law as a data protection issue, but that is clearly not sufficient. What society and the law need in the near future is an administrative law of the algorithm or, alternatively, a much more specific regulation of the algorithm-related issues in the data protection legislation, coupled with strong oversight bodies. However, it may be doubted whether it is a positive development at all that ordinary rules of administrative law and

good government, including everyone’s right of access to information, are replaced by a narrower data protection perspective. Furthermore, by using automated decision-making, civil servants are likely to be dissociated from physically making the decisions in a way which relieves them of the various forms of individual liability they have under the law (in Finland, this would encompass liability under administrative law, criminal liability, and tort liability). If something goes wrong, what remains is – probably – a general liability of the “state” of some sort (as a collective denomination for the state, the municipalities, and the other legally independent organs of public administration), or perhaps the liability of the producer of the software, provided that the procurement contract contains provisions to that effect, if the software producer is not an in-house service. Therefore, the broader issues of liability for automated decisions under both public law and private (civil) law should also be addressed and specified in this new situation created by the new paradigm. Provided that a paradigm shift is at hand, we are, however, in the end faced with one fundamental question: will there be a rule of law or a rule of algorithm?
If society increasingly turns to a rule of algorithm, or at least to rule by algorithm, the algorithm should be known and knowable to all in the same way as the law is expected to be.71 Therefore, in addition to an increasing number of rules at the level of ordinary law, there may come a time when the use of algorithms in public decision-making will have to be regulated at the level of constitutional law, as in 1766, potentially as a right of everyone to know the construction of the algorithm, at the same time as such a constitutional provision sets frames and requirements for the more specific ordinary legislation that should be in place to steer the use of automated decision-making procedures by public authorities. There should be

71   Goldoni (2015) 128–129, thinks that it is difficult in the realm of code to adopt a rule of code approach, because code is not exactly like law. He nevertheless recognizes the core issue very well when observing that “we have also seen that when a particular code is ‘enacted’, it may be too late to remedy the violation of certain rights”. From this he draws the correct conclusion that “the accent should be put on the moment of production, rather than on the moment of distribution. The moment of production should be assessed according to two intertwined principles”, which are transparency and publicness. “Decisions, in order to be accountable, should be known and also the procedure that brought to that decision should be disclosed. The second principle that should guide the ‘writing’ of code is equal chance of participation to the process, which also entails the idea that the writing process should be as inclusive as possible.” In essence, our conclusions in this article appear to be generally in harmony with those of Goldoni (2015) and also with those of Magnusson Sjöberg (1992) and Schartum (2016a).

a general right of access to at least those algorithms that are used by public authorities for automated decision-making.

Markku Suksi is Professor of Public Law at Åbo Akademi University, Finland.