Right to be forgotten - when you get core principles wrong

The EU and member states need to act in a timely manner to augment the "Right to be forgotten".

The principle of a "Right to be forgotten" is fundamentally human and critical to maintaining societal stability in a networked age.

But the European "Right to be forgotten" is perhaps the most obvious example of legal failure when vital principles are not transformed into operational solutions.

The principle cannot be implemented as a legal-only structure. To become operational, it has to be enforceable, and by the nature of the problem this requires the law to dictate technical design principles from a preventive approach. Otherwise, technology design will override the legal principle, making it void and "unworkable" - and as such easy to ignore by commercial or shady (read: in reality non-democratic) government institutions.

The main technical design principle has to change so as to enable non-identification or contextual identity in legitimate society transactions, including eCommerce and public healthcare. This is both possible and critical for markets and democracy - and it is the only way to prevent the likes of Google from acquiring destabilizing and self-reinforcing power over citizens and society processes.

1. The Right to be Forgotten addresses two very different problems.

a) Citizens own and MUST control their data. This is fundamental for freedom, security and prosperity (this blog will elaborate).

The legal-only thinking then assumes that data are identified and that citizens therefore need the means to have their data deleted. This would mean that e.g. Google - on request - has to delete ALL data collected about a citizen, including advertisement profiles and click-data collected via third parties through e.g. Google Analytics.

b) Citizens' right to forgiveness and correctness. An important but much, much smaller issue: the right to personal integrity (eliminating errors and lies) and to - eventually and gradually - re-enter society even if you have, truthfully, been involved in events that can later be detrimental to your ability to function (e.g. a crime in your youth).

The legal-only thinking, in its most primitive form, is that this amounts to a right to censorship of certain facts, so that e.g. Google should be forced - and legitimized - to filter out certain facts as part of its critical function as a search engine, even though the sources (e.g. old news articles) are not corrected, deleted or "forgotten".

2. The principle of the Right to be Forgotten requires preventive technology thinking.

If you want an enforceable ability to be "forgotten", this requires either

a) that the data were not made identifiable in the first place - by far the main design criterion of a digital world - or

b) in cases where the main design principle has failed, the ability of citizens to get a completely new identity.

Changing one's name is an effective way of "deleting the past". E.g. in the UK it was, until recently, deliberately both legally and socially acceptable for citizens to relocate and assume a new name without informing anyone - provided it was not an attempt to escape accountability for crimes. In Germany, as a legacy of the Nazi regime's thorough registration and tracking of Jews - a main reason the Holocaust was so effective - it is still illegal to create identity structures across the states in the federal structure. Witness relocation and undercover police work involve the creation of entirely new identities, in principle without any linkage to the old ones, as powerful entities will do anything to locate the person assuming the new identity.

However, assuming an entirely new identity is a very costly, time-consuming and socially destructive way of solving security problems. Therefore the main - and preventive - principle should and MUST (in order to be adopted) be to ensure contextual identity, so each function or purpose is isolated from the others and as such can be discarded if the need arises (a sketch of this idea follows below).
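
To make "contextual identity" concrete, here is a minimal sketch in Python. It is purely illustrative - the class name, key handling and context labels are assumptions, not anything prescribed by the regulation or by any standard. The idea: the citizen holds one random key per context, each context gets its own pseudonym, and "forgetting" a context simply means deleting its key, after which the pseudonym can no longer be reproduced or linked to anything.

```python
# Minimal, illustrative sketch of "contextual identity".
# One random key per context, held by the citizen (or a trusted agent),
# never by the service provider. Deleting a key "forgets" that context.
import hmac
import hashlib
import secrets

class ContextualIdentity:
    def __init__(self):
        self._context_keys = {}   # citizen-side only

    def pseudonym(self, context: str) -> str:
        """Stable pseudonym for one context, e.g. 'public-healthcare' or 'webshop-x'."""
        key = self._context_keys.setdefault(context, secrets.token_bytes(32))
        return hmac.new(key, context.encode(), hashlib.sha256).hexdigest()

    def forget(self, context: str) -> None:
        """Discard a context; without its key the pseudonym is unrecoverable."""
        self._context_keys.pop(context, None)

citizen = ContextualIdentity()
health_id = citizen.pseudonym("public-healthcare")
shop_id = citizen.pseudonym("ecommerce-shop")   # unlinkable to health_id by design
citizen.forget("ecommerce-shop")                # the shop context is now "forgotten"
```

The point of the sketch is that unlinkability is a property of the data design itself, not of a deletion request made after the fact.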

As a general comment on the EU Data Reform, the claim here is that the principle of the "Right to be Forgotten" is correct, but it only makes sense if the root principle of NON-identification (i.e. removing the technical ability to link non-related transactions to the same person) is implemented through technology.

If the EU wants to solve the massive "Google" problem, it simply requires the EU to begin protecting citizens as they enter the digital networks, i.e. ensuring that each transaction is conducted under new "identities" - new identifiers covering everything related to the transaction (see the sketch below). This is the only operational way to enforce the principles on Google and others, preventing data abuse and market-destabilizing power concentration in infrastructure.
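
A hedged sketch of what "new identifiers per transaction" could look like in practice - again illustrative, with the class names and record layout as assumptions: every transaction carries a fresh random identifier, the citizen keeps the only index that links them, and the provider therefore never accumulates a cross-transaction profile that would later need to be deleted.

```python
# Illustrative sketch only: per-transaction identifiers.
# Each transaction is conducted under a fresh random identifier; only the
# citizen keeps the index linking them, so the provider holds no profile.
import secrets
from dataclasses import dataclass, field

@dataclass
class Transaction:
    transaction_id: str   # fresh random identifier - all the provider ever sees
    payload: dict         # the content of this one transaction

@dataclass
class CitizenAgent:
    _index: dict = field(default_factory=dict)   # citizen-side linkage, kept locally

    def new_transaction(self, provider: str, payload: dict) -> Transaction:
        tx = Transaction(transaction_id=secrets.token_hex(16), payload=payload)
        self._index[tx.transaction_id] = provider
        return tx

agent = CitizenAgent()
t1 = agent.new_transaction("webshop", {"item": "book"})
t2 = agent.new_transaction("webshop", {"item": "lamp"})
# The webshop receives t1 and t2 but they share no identifier, so linking -
# and therefore profiling - is only possible on the citizen's side.
```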

3. Google lobbying is actively trying to sabotage the principles.

Google is far from the only commercial entity trying to undermine the principles of a free market, but it is by far the largest and most powerful, as it has managed to use the self-reinforcing nature of winner-takes-all network business models to systematically erode competition and drive this setup from one industry into the next.

Google's business model of systemic data abuse and market control is actively threatened by enforcement of the EU Data Regulation. Google would therefore do almost anything to sabotage or circumvent this regulation, as it did and does with e.g. the EU ePrivacy Directive related to cookies.

Operationally, Google is trying to use the recent legal ruling in a Spanish case to create the media illusion that the Right to be Forgotten is only about censoring details of people's past, and not - as it mainly is - about all of Google's data profiling of individuals in order to profit from behavioral marketing.

And the lobbying works just fine. E.g. in the UK, the House of Lords EU sub-committee just characterized the Right to be Forgotten as "unworkable".

The real problem here is that the treatment of the issue is wrong. We need to split it into two different problems: a) a really hard "media problem" - when, and especially how, media are required to "forgive" past deeds - and b) the search engines' role in tracking down the source of such data so it can be "forgiven" or "forgotten" at the source, rather than the search engine having to "censor" the Internet.

Google is extremely smart here. By trying to turn the Data Reform into "legitimizing censorship", it ensures the reform will backfire massively, as it will generate a massive number of openly absurd cases and be used by both commercial organizations and government institutions to hide criticism. Google is simply waiting for the first attempt by some government institution, e.g. UK GCHQ, to require filtering of "Snowden" or some claimed national-security-related information.

As this emerges, Google can simply claim the moral high ground, working for "democracy" against such "censorship" and creating an outcry against the "Right to be Forgotten". Google is using the same strategy on "dictatorships".

But this is wrong - in both cases. Even though such filtering requires some resources, Google doesn't care about "forgetting" some detail (Google still has the detail and only filters the search output - it can still utilize the implied sensitivities in the active profiling of citizens). What Google really cares about is control of personal data and behavioral profiles as the mission-critical resource. By diverting attention from the main problem and creating a non-solution to the illusory case, Google is actively trying to sabotage the legal principle.

Even though e.g. Facebook and Twitter may help spread knowledge of regime wrongdoings, the deliberate lack of security in these structures (to facilitate the winner-takes-all network models) makes them the most powerful tools for dictatorships ever. We already see how regimes in e.g. Egypt, Turkey, the US, the UK, China, Russia and elsewhere use these data sources to track down and map dissidents so they can target the leaders who fuel activism.

This will only get worse if politicians - however unintentionally - legitimize historical revisionism and censorship on a grand scale by allowing technical structures that filter reality to be built into infrastructure. This would just be the European version of China's Great Digital Wall, and it would be abused commercially to hide criticism and even fraud.

4. The EU and member states need to act in a timely manner to augment the "Right to be forgotten".

The principle of a "Right to be forgotten" is fundamentally human and critical to maintaining societal stability in a networked age. But it cannot be implemented as a legal-only structure. To become operational, it has to be enforceable, and by the nature of the problem this requires the law to dictate technical design principles from a preventive approach. Otherwise, technology design will override the legal principle, making it void and "unworkable" - and as such easy to ignore by commercial or shady (read: in reality non-democratic) government institutions.

The main technical design principle has to change so as to enable non-identification or contextual identity (YES, eliminate digital identification) in all legitimate society transactions, including eCommerce and public healthcare. This is both possible and critical for markets and democracy - and it is the only way to prevent the likes of Google from acquiring destabilizing and self-reinforcing power over citizens and society processes.

As to the "Spanish case" - this is a real problem, but a minor one in perspective. Partly because it is complex to create general rules for when to "rewrite history" without enabling serious abuse, unless this is handled preventively, e.g. by avoiding identification in the initial publication. But also because citizens - in really bad cases and under certain conditions, even though it is a rather radical move - have, and should have, the possibility of a social identity change to wipe the past clean and start over.