No terror arrests from stop and search in 2009, says UK government

The UK Home Office statistics showed that 101,248 people were stopped and searched in England, Wales and Scotland under Section 44 of the Terrorism Act. Of the 506 arrests that resulted, none was terrorism-related. Following a ruling of the European Court of Human Rights, UK police are no longer allowed to stop and search people unless they “reasonably suspect” them of being terrorists.

The statistics also showed that no terror suspects had been held in custody before charge for longer than 14 days since 2007. The latest figures will raise doubts over the future of controversial powers which allow police to detain terror suspects for between 14 and 28 days before charging them. Detention and stop-and-search powers are being examined as part of a review of the government’s counter-terrorism policy led by the Liberal Democrat peer Lord Ken Macdonald, due to be published shortly.

Results of the first EU-wide survey on police stops and minorities and the FRA Guide on preventing discriminatory ethnic profiling

The European Union Agency for Fundamental Rights (FRA) has published results from its EU-MIDIS survey, showing that minorities who perceive they are stopped because of their minority background have a lower level of trust in the police.

EU-MIDIS Data in Focus 4: Police stops and minorities focuses on the experiences of police stops among the 23,500 individuals with an ethnic minority or immigrant background interviewed as part of the survey. The report also contains results showing levels of trust in the police.

Understanding and preventing discriminatory ethnic profiling: a guide aims to help the police address and avoid discriminatory ethnic profiling, and is designed to be used as a tool for more effective policing.

Other document(s)

  • Understanding and preventing discriminatory ethnic profiling: a guide  

Nationwide Suspicious Activity Reporting Initiative (NSI)

Thomas O’Reilly, Director of the Nationwide Suspicious Activity Reporting Initiative Program Management Office at the Department of Justice, elaborated his thoughts on the programme here.

Law enforcement agencies across the U.S. receive thousands of
reports a day on suspicious activity.  But how do these agencies
determine what information is related to terrorism, or for that matter,
even related to a crime?  More importantly, how do we identify the
critical pieces of information in a way that protects the privacy, civil
rights and civil liberties of individuals? 
The Nationwide Suspicious Activity Reporting Initiative (NSI) has
created a common approach for gathering, documenting, processing,
analyzing, and sharing information about terrorism-related suspicious
activities.  This process is a behavior-based approach, in which a SAR is
only documented when the “observed behaviors [are] reasonably indicative
of preoperational planning related to terrorism or other criminal
activity.” This helps mitigate the risk of profiling based on race,
ethnicity, national origin, or religious affiliation or activity.
A critical element of the NSI is the protection of Americans’
privacy, civil rights, and civil liberties, which led to the creation of
a comprehensive privacy protection framework that must be adhered to by
all sites as they implement and participate in the NSI.  The NSI Privacy Framework includes:

  • the development and adoption of written privacy, civil rights, and civil liberties policies;
  • the designation of a privacy and civil liberties officer;
  • the institution of the ISE-SAR Functional Standard business processes; and
  • the training of NSI personnel before sites are permitted to post ISE-SARs to, or access them from, the ISE Shared Space.
A transparent process and collaboration with advocacy groups
will reinforce the ongoing commitment to earn and maintain the public
trust.  The NSI Program Management Office (PMO)
continues to build collaborative relationships with advocacy groups,
particularly since these groups have served an essential role in the
shaping and strengthening of the NSI Privacy Framework as well as in the
development and review of foundational products, such as the revision
of the ISE-SAR Functional Standard.  The NSI PMO strongly encourages
sites to engage members of the public, including privacy and civil
liberties advocacy groups and private sector partners, in the course of
development and implementation of the NSI.
The NSI has not only created a standardized process so SAR
information can be shared easily across jurisdictions, but it has also
led to stronger protections for the privacy, civil rights, and civil
liberties of Americans.  The ongoing success of the NSI largely depends
on the ability to earn and maintain the public’s trust, so it is crucial
that NSI partners continue to work together to maximize information
sharing while strengthening privacy, civil rights, and civil liberties.

CTITF Basic Human Rights Reference Guides

The first two guides on the stopping and searching of persons and on
security infrastructure have just been posted on the CTITF website. The Guides are an initiative of the CTITF Working Group on Protecting Human Rights while Countering Terrorism. They have been prepared to assist Member States in strengthening the protection of human rights in the context of countering terrorism and aim to provide guidance on how Member States can adopt human rights-compliant measures in a number of counter-terrorism areas. The Guides also identify the critical human rights issues raised in these areas and highlight the relevant human rights principles and standards that must be respected. Each Guide comprises an introduction and a set of guiding principles and guidelines, which provide specific guidance to Member States based on universal principles and standards, followed by an explanatory text containing theoretical examples and descriptions of good practices. Each Guide is supported by reference materials (including references to relevant international human rights treaties and conventions; UN standards and norms; general comments, jurisprudence and conclusions of human rights mechanisms; and reports of UN independent experts, best practice examples and relevant documents prepared by United Nations entities and organizations).

Aviation security update

2010 Beijing Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation

Two new counterterrorism treaties—the 2010 Beijing Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation and the 2010 Beijing Protocol to the 1970 Hague Convention on the Suppression of Unlawful Seizure of Aircraft—were adopted in Beijing, China, on September 10, 2010. According to the U.S. Department of State the treaties are meant to improve aviation security and “strengthen the existing international counterterrorism legal framework and facilitate the prosecution and extradition of those who seek to commit acts of terror.”

The instruments are a response to the 9/11 terrorist attacks and criminalize several “new and emerging threats to the safety of civil aviation, including using aircraft as a weapon.” Also mentioned in the press release is the focus on greater cooperation among states in combating terrorism and the continued need to ensure “human rights and fair treatment of terrorist suspects.”

Finally, the 2010 Beijing Convention criminalizes “the transport of biological, chemical, and nuclear weapons and related material.”

2nd meeting of the EU’s body scanner task force

The Commission stresses that COM 311 of this year indicates clearly that security scanners have a better detection rate than metal detectors; this is a “very clear conclusion” which can’t be disregarded.

The Commission announces that another impact assessment will be ready ‘early 2011’. After this assessment the Commission will possibly come forward with a legislative proposal under comitology.

The ICAO Assembly will give the opportunity to have a ‘global approach’ to airport security.

List of participants at the meeting here.

New UK government response to recommendations of home affairs committee on use of body scanners

According to the government:

“EC regulations currently restrict the use of security scanners to being used as an additional measure once passengers have already been through existing security controls.”

The UK believes that EU regulations should require member states to produce and publish codes of practice which set out how passengers’ rights will be protected under applicable European and national law.

On profiling:

The Government makes an important distinction between “profiling”, where passengers are selected on the basis of personal characteristics, in a potentially discriminatory manner, and “targeting”, where selection is based on prior information and/or intelligence, or on the basis of certain observed behaviours. Behavioural analysis may be one means of doing this.

Former Detainees Join Federal Court Challenge to Post-9/11 Racial Profiling and Abuse of Muslim, Arab and South Asian Men

The Center for Constitutional Rights (CCR) announced that six new plaintiffs have joined a federal class-action lawsuit, Turkmen v. Ashcroft, challenging their detention and mistreatment by prison guards and high-level Bush administration officials in the wake of 9/11. In papers filed in Federal Court in Brooklyn, CCR details new allegations linking former Attorney General Ashcroft and other top Bush administration officials to the illegal roundups and abuse of the detainees.

The new suit names as defendants then-Attorney General John Ashcroft, FBI Director Robert Mueller, former INS Commissioner James Ziglar and officials at the Metropolitan Detention Center in Brooklyn, where the plaintiffs were held. It includes additional detail regarding high-level involvement in racial profiling and abuse, including allegations that former Attorney General Ashcroft ordered the INS and FBI to investigate individuals for ties to terrorism by, among other means, looking for Muslim-sounding names in the phonebook.

Academic round-up on privacy and new technologies, including search engines

The End of the Net as We Know it? Deep Packet Inspection and Internet Governance
by Ralf Bendrath and Milton Mueller

Advances in network equipment now allow internet service providers to monitor the content of data packets in real-time and make decisions about how to handle them. If deployed widely, this technology, known as deep packet inspection (DPI), has the potential to alter basic assumptions that have underpinned Internet governance to date. The paper explores the way Internet governance is responding to deep packet inspection and the political struggles around it. Avoiding the extremes of technological determinism and social constructivism, it integrates theoretical approaches from the sociology of technology and actor-centered institutionalism into a new framework for technology-aware policy analysis.

Keywords: Internet governance, Internet regulation, Deep Packet Inspection, Privacy, Surveillance, Censorship, Internet service providers, Actor-Centered Institutionalism, Disruptive technology, Socio-technical systems, Network Neutrality, Social Construction of Technology, Technological Determinism

The Legality of Deep Packet Inspection – Angela Daly
European University Institute – Department of Law (LAW)


Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using “end-to-end connectivity” as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a “dumb” network, with “intelligent” devices (such as personal computers) at the end or “last mile” of the network. The dumb network does not interfere with an application’s operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet, deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may challenge profoundly the egalitarian and open character of the Internet.

This paper will firstly elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, which involve fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment of the conformity of the use of deep packet inspection with law will be made. There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.

Keywords: deep packet inspection, net neutrality, US, EU, privacy, competition, free expression, copyright
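The technical core both abstracts describe is that a middlebox reads past the packet headers into the application-layer payload. A minimal sketch of the idea (purely illustrative, not drawn from either paper — the header length, signature table, and function names are invented for the example):

```python
from typing import Optional

# Toy signature table: byte patterns that identify application-layer traffic.
SIGNATURES = {
    b"BitTorrent protocol": "bittorrent-handshake",
    b"GET /": "http-request",
}

HEADER_LEN = 28  # assumed fixed header size for this sketch

def shallow_inspect(packet: bytes) -> dict:
    """Header-only view: the 'dumb' network treats every payload alike."""
    return {"length": len(packet)}

def deep_inspect(packet: bytes) -> Optional[str]:
    """Payload view: classify traffic by matching content signatures."""
    payload = packet[HEADER_LEN:]
    for signature, label in SIGNATURES.items():
        if signature in payload:
            return label
    return None

pkt = b"\x00" * HEADER_LEN + b"\x13BitTorrent protocol" + b"\x00" * 8
print(deep_inspect(pkt))  # -> bittorrent-handshake
```

The contrast between the two functions is the point: a shallow filter can only count and route packets, while the deep one classifies (and could then block, throttle, or log) traffic by its content — which is what raises the privacy, net-neutrality, and free-expression issues discussed above.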

The Boundaries of Privacy Harm – M. Ryan Calo
Stanford Law School


Just as a burn is an injury caused by heat, so is privacy harm a unique injury with specific boundaries and characteristics. This Essay describes privacy harm as falling into two related categories. The subjective category of privacy harm is the unwanted perception of observation. This category describes unwelcome mental states – anxiety, embarrassment, fear – that stem from the belief that one is being watched or monitored. Examples include everything from a landlord listening in on his tenants to generalized government surveillance.

The objective category of privacy harm is the unanticipated or coerced use of information concerning a person against that person. These are negative, external actions justified by reference to personal information. Examples include identity theft, the leaking of classified information that reveals an undercover agent, and the use of a drunk-driving suspect’s blood as evidence against him.

The subjective and objective categories of privacy harm are distinct but related. Just as assault is the apprehension of battery, so is the unwanted perception of observation largely an apprehension of information-driven injury. The categories represent, respectively, the anticipation and consequence of a loss of control over personal information.

The approach offers several advantages. It uncouples privacy harm from privacy violations, demonstrating that no person need commit a privacy violation for privacy harm to occur (and vice versa). It creates a “limiting principle” capable of revealing when another value – autonomy or equality, for instance – is more directly at stake. It also creates a “rule of recognition” that permits the identification of a privacy harm when no other harm is apparent. Finally, the approach permits the sizing and redress of privacy harm in novel ways.

Privacy and Regulatory Innovation: Moving Beyond Voluntary Codes – Ira Rubinstein
Information Law Institute, NYU School of Law

NYU School of Law, Public Law Research Paper No. 10-16
I/S: A Journal of Law and Policy for the Information Society, Forthcoming Winter 2011


According to its many critics, privacy self-regulation is a failure. It suffers from weak or incomplete realization of Fair Information Practice Principles, inadequate incentives to ensure wide scale industry participation, ineffective compliance and enforcement mechanisms, and an overall lack of transparency. Rather than attacking or defending self-regulation, this Article explores co-regulatory approaches in which government plays a role in setting requirements for industry guidelines and imposing sanctions for non-compliance. Based on three case studies of a weakly mandated industry code aimed at online behavioral advertising practices, a more strongly mandated program enabling data flows between Europe and the US, and a safe harbor program designed to protect children’s privacy, this Article argues that statutory safe harbors have many strengths but would benefit from being redesigned. Next it conceptualizes new models for privacy co-regulation based on insights derived from “second generation” environmental policy instruments such as environmental covenants. Finally, it offers specific recommendations – to the FTC, on how it might begin to use the covenanting approach to experiment with innovative technologies and address hard problems such as online behavioral advertising, and to Congress on how best to structure new safe harbor programs as an essential component of omnibus consumer privacy legislation. All of these approaches to regulatory innovation move beyond purely voluntary codes in favor of co-regulatory solutions.

The Mandatory Registration of SIM Cards – Ewan Sutherland
LINK Centre, University of Witwatersrand; CRID, University of Namur

March 8, 2010

Computer and Telecommunications Law Review, pp. 61-63, 2010


To support investigations into a range of crimes, governments and regulators are requiring mobile operators to register all pre-paid SIM-cards by collecting personal details to be made available in a central database to police and security services. Unregistered SIM-cards are being barred. Brief descriptions are provided of Botswana, Jordan, Kenya, Nigeria, Pakistan, Singapore, Tanzania and Vietnam.

Trawling DNA Databases for Partial Matches: What is the FBI Afraid of? – David H. Kaye
The Pennsylvania State University Dickinson School of Law

Cornell Journal of Law and Public Policy, Vol. 19, No. 1, 2009
Penn State Legal Studies Research Paper No. 8-2010


DNA evidence is often presented as the “gold standard” for forensic science. But this was not always the case. For years, eminent scientists complained that the estimates of the tiny frequencies of DNA types were unfounded. It took scores of research papers, dozens of judicial opinions, and two committees of the National Academy of Sciences to resolve the dispute by the mid-1990s. Since 2000, however, reports have surfaced of shocking numbers of “partial matches” among samples within large DNA databases, and some scientists have complained that the infinitesimal figures used in court to estimate the probability of a random match are no better than alchemy. To study the partial-match phenomenon further, defendants have sought to discover all the DNA records (with personal identifiers removed) kept in offender databases. The FBI has responded by branding the proposed research as useless and the release of the data as an illegal invasion of privacy. The media have reacted by calling for congressional hearings and, possibly, criminal charges against FBI officials.

This Article reviews the existing research findings and considers the scientific, legal, and ethical objections to disclosure of the DNA data. It concludes that the arguments against further research are unpersuasive. At the same time, it finds that the claims of dramatic departures from the expected numbers of partial matches are exaggerated and predicts that new research will not reveal unknown flaws in the procedure for estimating the chance of a match to an unrelated individual. In view of the importance of DNA evidence to the criminal justice system, this Article recommends using the databases for more statistical research than has been undertaken so far. It also calls for dissemination of the anonymized records for this purpose.

Keywords: DNA evidence, probability, population genetics, DNA databases, birthday paradox, partial match
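The “birthday paradox” keyword points at the arithmetic behind the seemingly shocking number of partial matches: the expected count grows with the number of pairs of profiles, not the number of profiles. A quick illustration (the figures below are round numbers chosen for the example, not taken from the article):

```python
from math import comb

def expected_matching_pairs(n_profiles: int, p_match: float) -> float:
    """Expected number of coincidentally matching pairs among n profiles,
    assuming independent profiles with pairwise match probability p_match."""
    return comb(n_profiles, 2) * p_match

# Illustrative numbers: a partial-match probability on the order of
# one in a million, in a database of 65,000 profiles.
pairs = comb(65_000, 2)
print(pairs)                                  # 2112467500 pairwise comparisons
print(expected_matching_pairs(65_000, 1e-6))  # ~2112 expected matching pairs
```

Even a one-in-a-million match probability yields thousands of expected partial matches once two billion pairwise comparisons are in play, which is why the article argues that large raw match counts are not, by themselves, evidence that the frequency estimates are flawed.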

Privacy in Search Engines: Negotiating Control – Federica Casarosa
Robert Schuman Centre for Advanced Studies


The Internet and modern communication technologies more generally have radically transformed society, bringing new risks for citizens’ privacy. The tangible effects of this technological progress have been, on the one hand, improved tools for the retrieval and collection of data and, on the other, an increased capability to store and aggregate the collected information. This can be interpreted positively in terms of much greater and better opportunities for the development of personality, making available information that was previously inaccessible (due to the high cost or effort needed to access it). However, the same technical tools can be used to achieve the opposite result: to prevent the expression of users’ personality through a continuous, though imperceptible, control that could shift the interpretation of user profiles from a pre-judgment into a prejudice.

From a legal point of view, different solutions have been put forward, stemming from different approaches. On the one hand, we can observe the case of self-regulation, where technology itself can help to limit the aforementioned risks for personal data; on the other hand, we can take the example of legislative harmonization implemented by the Member States of the EU, where the monitoring activity is carried out by independent authorities, the so-called Data Protection Authorities.

A recent example that shows the market dynamics and the legal reactions concerning data protection is the search engine Google, which received a brief but significant letter from the Article 29 Working Party (hereinafter Art 29 WP) due to the low level of protection assured by the Mountain View-based company in the delivery of its services. The intervention, though not binding, has been the first step for Google in the direction of an improvement of its data protection policy, so as to achieve the level required by European legislation.

Keywords: privacy, search engines, internet, private regulation

Search Query Privacy: The Problem of Anonymization – Ron A. Dolin

Hastings Science and Technology Law Journal, Vol. 2, No. 2, p. 137, Summer 2010


Search queries may reveal quite sensitive information about the querier. Even though many queries are not directly associated with a particular person, it has been argued that the IP addresses and cookies of the users can often be sufficient to figure out who the querier is, especially if tied to information from ISPs regarding IP address assignments at the time of the relevant query. Given that the queries have been subject to discovery both by various governments and third parties, there has been great concern for how to keep such queries private. A typical approach to such privacy legislation, especially in Europe, has been to require either destruction of the data so that it is no longer available for discovery, or anonymization so that it cannot be associated with a particular person. This solution has never been proposed for personal data such as medical information used by doctors or financial information used by credit agencies. Instead, there seems to be an assumption about these types of data that their long-term storage is necessary and/or beneficial to the individual associated with them, or at least to society at large. The framework for maintaining the privacy of these data turns on safeguards where it is being held, user control of its retention and accuracy, and strict legal limitations regarding its discovery. This article briefly reviews a few legal frameworks for data protection both in the U.S. and in Europe. It presents several arguments that the deletion or anonymization of search query data is problematic, and describes a framework similar to the way we handle health data that is more beneficial to all stakeholders. Such an approach would lead to a more uniform solution to data protection in which maintaining search query privacy would not sacrifice the benefits of long term, confidential storage of the data.
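One reason the anonymization of query logs is problematic, as the article argues, is that common pseudonymization techniques are weaker than they look. A hedged illustration (the address and function names are invented for the example): replacing an IP address with an unkeyed hash can be undone by simply enumerating the address space, since IPv4 has only 2^32 possible values.

```python
import hashlib
from typing import Optional

def pseudonymize(ip: str) -> str:
    """Replace an IP address with its SHA-256 hash — a common but weak approach."""
    return hashlib.sha256(ip.encode()).hexdigest()

stored = pseudonymize("203.0.113.77")  # what a "anonymized" log would retain

def recover(target_hash: str) -> Optional[str]:
    """Brute-force the hash. Searching one /16 here for brevity; the full
    IPv4 space is only ~4.3 billion hashes, well within reach."""
    for b3 in range(256):
        for b4 in range(256):
            candidate = f"203.0.{b3}.{b4}"
            if pseudonymize(candidate) == target_hash:
                return candidate
    return None

print(recover(stored))  # -> 203.0.113.77
```

This is why the article treats deletion-or-anonymization mandates as a false comfort, and instead proposes the safeguard-and-access-control framework used for health and financial records.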

Privacy Revisited – GPS Tracking as Search and Seizure – Bennett L. Gershman
Pace University – School of Law


Part I of this Article discusses the facts in People v. Weaver, the majority and dissenting opinions in the Appellate Division, Third Department, and the majority and dissenting opinions in the Court of Appeals. Part II addresses the question that has yet to be decided by the U.S. Supreme Court – whether GPS tracking of a vehicle by law enforcement constitutes a search under the Fourth Amendment. Part III addresses the separate question that the Court of Appeals did not address – whether the surreptitious attachment of a GPS device to a vehicle constitutes a seizure under the Fourth Amendment. The Article concludes that law enforcement’s use of a GPS device to track the movements of a vehicle continuously for an extended period of time is a serious intrusion into a motorist’s reasonable expectation of privacy that constitutes a search under the Fourth Amendment. Moreover, although the issue is somewhat murkier, the attachment of the GPS to a vehicle may constitute a seizure under the Fourth Amendment.

Biometrics, Retinal Scanning, and the Right to Privacy in the 21st Century – Stephen Hoffman
University of Minnesota – Twin Cities – School of Law

Syracuse Science and Technology Law Reporter, 2010


Biometric identification techniques such as retinal scanning and fingerprinting have now become commonplace, but near-future improvements on these methods present troubling issues for personal privacy. For example, retinal scanning can be used to diagnose certain medical conditions, even ones for which the patient has no symptoms and no other way of detecting the problem. If a health insurance company scans the retinas of potential clients before they purchase coverage, applicants could be charged higher premiums for conditions that do not yet present any issues. Not only is this unfair, but the ease with which these scans can be conducted – including scanning without the subject’s consent or knowledge – presents disturbing privacy concerns and suggests an Orwellian future, controlled by Big Business rather than Big Brother.

Keywords: biometrics, biometric identification, retinal scanning

Othello Error: Facial Profiling, Privacy and the Suppression of Dissent – Lenese C. Herbert

Ohio State Journal of Criminal Law, Vol. 5, pp. 79-129, 2007


In this article, Professor Herbert challenges the U.S. Transportation Security Administration’s post-September 11, 2001, use of Paul Ekman and Wallace Friesen’s Facial Action Coding System (FACS) to identify potential terrorists in American airports. Professor Herbert asserts that invasive visual examination of travelers’ faces and facial expressions for law enforcement purposes under the auspices of protective administrative searches ineffectively protects national and airport security and violates reasonable expectations of privacy. FACS improperly provides unreasonable governmental activity with a legitimizing scientific imprimatur that conceals governmental agents’ race- and ethnicity-based prejudices, which leads to targeting minorities’ faces as portents of danger. Professor Herbert assesses the concept of facial privacy in public, and in doing so, rejects the Supreme Court’s Katz v. United States test and argues in support of constitutional protection of public privacy.

Privacy by Deletion: The Need for a Global Data Deletion Principle – Benjamin J. Keele

Indiana Journal of Global Legal Studies, Vol. 16, No. 1, pp. 363-384

With global personal information flows increasing, efforts have been made to develop principles to standardize data protection regulations. However, no set of principles has yet achieved universal adoption. This note proposes a principle mandating that personal data be securely destroyed when it is no longer necessary for the purpose for which it was collected. Including a data deletion principle in future data protection standards will increase respect for individual autonomy and decrease the risk of abuse of personal data. Though data deletion is already practiced by many data controllers, including it in legal data protection mandates will further the goal of establishing an effective global data protection regime.

A Paradigm Shift in Electronic Surveillance Law – Mark Klamberg
Nordic Yearbook of Law and Information Technology, 2010


Electronic surveillance law is subject to a paradigm shift in which traditional principles are reconsidered and the notion of privacy has to be reconstructed. This paradigm shift is the result of four major changes in our society with regard to 1) technology, 2) perceptions of threats, 3) interpretation of human rights, and 4) ownership of telecommunications. The above-mentioned changes have created a need to reform both the tools of electronic surveillance and domestic legislation. Surveillance that was previously kept secret is now subject to public debate. The article focuses on systems of “mass surveillance”, such as data retention and signal intelligence, and whether these are consistent with the European Convention on Human Rights.

Keywords: Electronic Surveillance, Privacy, Signal Intelligence

New Challenges to Data Protection Study – Country Report: United States – Chris Jay Hoofnagle

European Commission Directorate-General Justice, Freedom and Security Report, May 2010


This report is one of 11 country reports produced for the “New Challenges to Data Protection” study, commissioned by the European Commission, and describes the ways in which US law addresses the challenges posed by the new social-technical-political environment.

The hallmark of the US federal approach to privacy is sectoral regulation. A panoply of statutes now regulates specific types of government and business practices, with no broadly applicable privacy statute governing data collection, use, or disclosure. The Federal Trade Commission has encouraged self-regulation in a number of sectors, and the development of privacy-enhancing technologies. The US approach to privacy is incoherent, sectorally based, and largely driven by outrage at particular, narrow practices. Still, several innovations from the US approach deserve attention internationally.

First, increasingly, privacy statutes create evolving standards of care, thus encouraging innovation for handling of data and avoiding the reification that can result from prescriptive, detailed regulation. For instance, the Fair Credit Reporting Act mandates an evolving “maximum possible accuracy” standard.

Second, in the direct marketing context, the US has imposed advertiser liability for violations of telemarketing, fax, and spam laws. This is a promising approach to address the use of difficult-to-identify and prosecute service providers that are responsible for illegal marketing campaigns.

Third, audit requirements for access to personal information have had a profound effect in encouraging industry and citizen policing of privacy violations. Audit logs have substantiated long-suspected privacy problems regarding “browsing” of files, and news media access to celebrities’ medical records.

Fourth, the US has briefly experimented with “data provenance,” a requirement that buyers of personal information exercise diligence to ensure against misuse of data. Data provenance responsibilities can create incentives to reduce gray and black market sales of personal information.

Finally, most federal privacy law acts as a floor of protections, allowing states to enact stronger rules. This has created a tension between state and federal governments, resulting in a leveling up of protections, because states (which tend to be more activist on privacy issues) can act where the US Congress is occupied with other issues.

Internet Jurisdiction and Data Protection Law: An International Legal Analysis – Christopher Kuner

International Journal of Law and Information Technology, 2010


Data protection law has been the subject of an increasing number of jurisdictional disputes, which have largely been driven by the ubiquity of the Internet, the interconnectedness of the global economy, and the growth of data protection law around the world in recent years. There are also an increasing number of instances where data protection law conflicts with legal obligations in other areas. Moreover, the rapid development of new computing techniques (such as so-called ‘cloud computing’) is putting even greater pressure on traditional jurisdictional theories. Jurisdictional uncertainties about data protection law have important implications, since they may dissuade individuals and companies from engaging in electronic commerce, can prove unsettling for individuals whose personal data are processed, and impose burdens on regulators. These difficulties are increased by the fact that, so far, there is no binding legal instrument of global application covering either jurisdiction on the Internet or data protection. This article examines international jurisdiction as it relates to data protection law, and specifically to instances in which jurisdiction under data protection law may be considered ‘exorbitant’, with a particular focus on rules of public international law.

Developing an Adequate Legal Framework for International Data Transfers – Christopher Kuner

Reinventing Data Protection?, S. Gutwirth et al. (eds.), pp. 263-273, Springer Science+Business Media B.V., 2009


With the EU Data Protection Directive having been in force now for nearly ten years, it is wise to examine the basic concepts and assumptions on which the Directive is based, to determine whether it is functioning properly. It is the thesis of this paper that the present EU legal framework for “adequacy” decisions for the international transfer of personal data is inadequate, in both a procedural and substantive sense, and needs reform. The framework was created for a world in which the Internet was not widely used, and in which data did not flow as easily across national borders as they do now. The present system of adequacy decisions has been grievously overloaded by the great increase in data flows in the past few years, and also drains resources that could better be used in other areas of data protection. European policymakers should take a hard look at the current adequacy system and its present failings, and reform the system in a way that more effectively protects the interests of data controllers, individuals, and data protection supervisory authorities.

Keywords: European Union, data protection, privacy, adequacy, international data transfers, global data flows, APEC, accountability