CTITF Basic Human Rights Reference Guides

The first two guides, on the stopping and searching of persons and on security infrastructure, have just been posted on the CTITF website. The Guides are an initiative of the CTITF Working Group on Protecting Human Rights while Countering Terrorism. They have been prepared to assist Member States in strengthening the protection of human rights in the context of countering terrorism, and aim to provide guidance on how Member States can adopt human rights compliant measures in a number of counter-terrorism areas. The Guides also identify the critical human rights issues raised in these areas and highlight the relevant human rights principles and standards that must be respected.

Each Guide comprises an introduction and a set of guiding principles and guidelines, which provide specific guidance to Member States based on universal principles and standards, followed by an explanatory text containing theoretical examples and descriptions of good practices. Each Guide is supported by reference materials, including references to relevant international human rights treaties and conventions; UN standards and norms; general comments, jurisprudence and conclusions of human rights mechanisms; and reports of UN independent experts, best practice examples and relevant documents prepared by United Nations entities and organizations.

Aviation security update

2010 Beijing Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation

Two new counterterrorism treaties – the 2010 Beijing Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation and the 2010 Beijing Protocol to the 1970 Hague Convention on the Suppression of Unlawful Seizure of Aircraft – were adopted in Beijing, China, on September 10, 2010. According to the U.S. Department of State, the treaties are meant to improve aviation security and “strengthen the existing international counterterrorism legal framework and facilitate the prosecution and extradition of those who seek to commit acts of terror.”

The instruments are a response to the 9/11 terrorist attacks and criminalize several “new and emerging threats to the safety of civil aviation, including using aircraft as a weapon.” Also mentioned in the press release are the focus on greater cooperation among states in combating terrorism and the continued need to ensure “human rights and fair treatment of terrorist suspects.”

Finally, the 2010 Beijing Convention criminalizes “the transport of biological, chemical, and nuclear weapons and related material.”

2nd meeting of the EU’s body scanner task force

The Commission stresses that COM 311 of this year indicates clearly that security scanners have a better detection rate than metal detectors; this is a “very clear conclusion” which can’t be disregarded.

The Commission announces that another impact assessment will be ready ‘early 2011’. After this assessment the Commission will possibly come forward with a legislative proposal under comitology.

The ICAO Assembly will provide an opportunity for a ‘global approach’ to airport security.

List of participants at the meeting here.

New UK government response to recommendations of home affairs committee on use of body scanners

According to the government:

“EC regulations currently restrict the use of security scanners to being used as an additional measure once passengers have already been through existing security controls.”

The UK believes that EU regulations should require member states to produce and publish codes of practice which set out how passengers’ rights will be protected under applicable European and national law.

On profiling:

The Government makes an important distinction between ‘profiling’, where passengers are selected on the basis of personal characteristics, possibly in a discriminatory manner, and ‘targeting’, where selection is based on prior information and/or intelligence, or on the basis of certain behaviours. Behavioural analysis may be one means of doing this.

Former Detainees Join Federal Court Challenge to Post-9/11 Racial Profiling and Abuse of Muslim, Arab and South Asian Men

The Center for Constitutional Rights (CCR) announced that six new plaintiffs have joined a federal class-action lawsuit, Turkmen v. Ashcroft, challenging their detention and mistreatment by prison guards and high-level Bush administration officials in the wake of 9/11. In papers filed in federal court in Brooklyn, CCR details new allegations linking former Attorney General Ashcroft and other top Bush administration officials to the illegal roundups and abuse of the detainees.

The new suit names as defendants then-Attorney General John Ashcroft, FBI Director Robert Mueller, former INS Commissioner James Ziglar and officials at the Metropolitan Detention Center in Brooklyn, where the plaintiffs were held. It includes additional detail regarding high-level involvement in racial profiling and abuse, including allegations that former Attorney General Ashcroft ordered the INS and FBI to investigate individuals for ties to terrorism by, among other means, looking for Muslim-sounding names in the phonebook.

Academic round-up on privacy and new technologies, including search engines

The End of the Net as We Know it? Deep Packet Inspection and Internet Governance
by Ralf Bendrath and Milton Mueller

Advances in network equipment now allow internet service providers to monitor the content of data packets in real-time and make decisions about how to handle them. If deployed widely this technology, known as deep packet inspection (DPI), has the potential to alter basic assumptions that have underpinned Internet governance to date. The paper explores the way Internet governance is responding to deep packet inspection and the political struggles around it. Avoiding the extremes of technological determinism and social constructivism, it integrates theoretical approaches from the sociology of technology and actor-centered institutionalism into a new framework for technology-aware policy analysis.

Keywords: Internet governance, Internet regulation, Deep Packet Inspection, Privacy, Surveillance, Censorship, Internet service providers, Actor-Centered Institutionalism, Disruptive technology, Socio-technical systems, Network Neutrality, Social Construction of Technology, Technological Determinism

The Legality of Deep Packet Inspection – Angela Daly
European University Institute – Department of Law (LAW)


Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using “end-to-end connectivity” as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a “dumb” network, with “intelligent” devices (such as personal computers) at the end or “last mile” of the network. The dumb network does not interfere with an application’s operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet, deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may challenge profoundly the egalitarian and open character of the Internet.

This paper will firstly elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, which involve fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment of the conformity of the use of deep packet inspection with law will be made. There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.

Keywords: deep packet inspection, net neutrality, US, EU, privacy, competition, free expression, copyright
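The abstract's distinction between a “dumb” network that routes on headers alone and deep packet inspection that reads application payloads can be made concrete with a toy Python sketch. The packet below is hand-built for illustration (the addresses, port and keyword filter are invented; real DPI middleboxes operate on live traffic at line rate, often in hardware):

```python
import struct

def parse_ipv4_tcp(packet: bytes):
    """Parse minimal IPv4 + TCP headers; return (src, dst, dst_port, payload)."""
    ihl = (packet[0] & 0x0F) * 4                  # IPv4 header length in bytes
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    tcp = packet[ihl:]
    dst_port = struct.unpack("!H", tcp[2:4])[0]
    data_off = (tcp[12] >> 4) * 4                 # TCP header length in bytes
    return src, dst, dst_port, tcp[data_off:]

def shallow_inspect(packet: bytes) -> str:
    """'Dumb network' view: routing decisions use only addresses and ports."""
    src, dst, port, _ = parse_ipv4_tcp(packet)
    return f"{src} -> {dst}:{port}"

def deep_inspect(packet: bytes, keyword: bytes) -> bool:
    """DPI view: look inside the application payload itself."""
    *_, payload = parse_ipv4_tcp(packet)
    return keyword in payload

# Build a toy IPv4+TCP packet carrying an HTTP request line.
payload = b"GET /secret-plans HTTP/1.1\r\n"
ip_hdr = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20 + 20 + len(payload), 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([93, 184, 216, 34]))
tcp_hdr = struct.pack("!HHIIBBHHH", 12345, 80, 0, 0, 5 << 4, 0x18, 0, 0, 0)
pkt = ip_hdr + tcp_hdr + payload

print(shallow_inspect(pkt))          # headers only: source, destination, port
print(deep_inspect(pkt, b"secret"))  # payload contents: True
```

The point of the sketch is the asymmetry: `shallow_inspect` never touches the payload, while `deep_inspect` reads what the user actually requested, which is precisely what raises the privacy and net-neutrality concerns discussed in the paper.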

The Boundaries of Privacy Harm – M. Ryan Calo
Stanford Law School


Just as a burn is an injury caused by heat, so is privacy harm a unique injury with specific boundaries and characteristics. This Essay describes privacy harm as falling into two related categories. The subjective category of privacy harm is the unwanted perception of observation. This category describes unwelcome mental states – anxiety, embarrassment, fear – that stem from the belief that one is being watched or monitored. Examples include everything from a landlord listening in on his tenants to generalized government surveillance.

The objective category of privacy harm is the unanticipated or coerced use of information concerning a person against that person. These are negative, external actions justified by reference to personal information. Examples include identity theft, the leaking of classified information that reveals an undercover agent, and the use of a drunk-driving suspect’s blood as evidence against him.

The subjective and objective categories of privacy harm are distinct but related. Just as assault is the apprehension of battery, so is the unwanted perception of observation largely an apprehension of information-driven injury. The categories represent, respectively, the anticipation and consequence of a loss of control over personal information.

The approach offers several advantages. It uncouples privacy harm from privacy violations, demonstrating that no person need commit a privacy violation for privacy harm to occur (and vice versa). It creates a “limiting principle” capable of revealing when another value – autonomy or equality, for instance – is more directly at stake. It also creates a “rule of recognition” that permits the identification of a privacy harm when no other harm is apparent. Finally, the approach permits the sizing and redress of privacy harm in novel ways.

Privacy and Regulatory Innovation: Moving Beyond Voluntary Codes – Ira Rubinstein
Information Law Institute, NYU School of Law

NYU School of Law, Public Law Research Paper No. 10-16
I/S: A Journal of Law and Policy for the Information Society, Forthcoming Winter 2011


According to its many critics, privacy self-regulation is a failure. It suffers from weak or incomplete realization of Fair Information Practice Principles, inadequate incentives to ensure wide scale industry participation, ineffective compliance and enforcement mechanisms, and an overall lack of transparency. Rather than attacking or defending self-regulation, this Article explores co-regulatory approaches in which government plays a role in setting requirements for industry guidelines and imposing sanctions for non-compliance. Based on three case studies of a weakly mandated industry code aimed at online behavioral advertising practices, a more strongly mandated program enabling data flows between Europe and the US, and a safe harbor program designed to protect children’s privacy, this Article argues that statutory safe harbors have many strengths but would benefit from being redesigned. Next it conceptualizes new models for privacy co-regulation based on insights derived from “second generation” environmental policy instruments such as environmental covenants. Finally, it offers specific recommendations – to the FTC, on how it might begin to use the covenanting approach to experiment with innovative technologies and address hard problems such as online behavioral advertising, and to Congress on how best to structure new safe harbor programs as an essential component of omnibus consumer privacy legislation. All of these approaches to regulatory innovation move beyond purely voluntary codes in favor of co-regulatory solutions.

The Mandatory Registration of SIM Cards – Ewan Sutherland
LINK Centre, University of Witwatersrand; CRID, University of Namur

March 8, 2010

Computer and Telecommunications Law Review, pp. 61-63, 2010


To support investigations into a range of crimes, governments and regulators are requiring mobile operators to register all pre-paid SIM-cards by collecting personal details to be made available in a central database to police and security services. Unregistered SIM-cards are being barred. Brief descriptions are provided of Botswana, Jordan, Kenya, Nigeria, Pakistan, Singapore, Tanzania and Vietnam.

Trawling DNA Databases for Partial Matches: What is the FBI Afraid of? – David H. Kaye
The Pennsylvania State University Dickinson School of Law

Cornell Journal of Law and Public Policy, Vol. 19, No. 1, 2009
Penn State Legal Studies Research Paper No. 8-2010


DNA evidence is often presented as the “gold standard” for forensic science. But this was not always the case. For years, eminent scientists complained that the estimates of the tiny frequencies of DNA types were unfounded. It took scores of research papers, dozens of judicial opinions, and two committees of the National Academy of Sciences to resolve the dispute by the mid-1990s. Since 2000, however, reports have surfaced of shocking numbers of “partial matches” among samples within large DNA databases, and some scientists have complained that the infinitesimal figures used in court to estimate the probability of a random match are no better than alchemy. To study the partial-match phenomenon further, defendants have sought to discover all the DNA records (with personal identifiers removed) kept in offender databases. The FBI has responded by branding the proposed research as useless and the release of the data as an illegal invasion of privacy. The media have reacted by calling for congressional hearings and, possibly, criminal charges against FBI officials.

This Article reviews the existing research findings and considers the scientific, legal, and ethical objections to disclosure of the DNA data. It concludes that the arguments against further research are unpersuasive. At the same time, it finds that the claims of dramatic departures from the expected numbers of partial matches are exaggerated and predicts that new research will not reveal unknown flaws in the procedure for estimating the chance of a match to an unrelated individual. In view of the importance of DNA evidence to the criminal justice system, this Article recommends using the databases for more statistical research than has been undertaken so far. It also calls for dissemination of the anonymized records for this purpose.

Keywords: DNA evidence, probability, population genetics, DNA databases, birthday paradox, partial match
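The “birthday paradox” effect behind the reportedly large numbers of partial matches can be shown with a short calculation: the expected number of matching pairs grows with the number of *pairs* of profiles, which is quadratic in database size. The figures below are illustrative, not taken from the Article:

```python
from math import comb

def expected_matches(n: int, p: float) -> float:
    """Expected number of matching pairs among n profiles, when any two
    unrelated profiles match (at the relevant loci) with probability p:
    E[matches] = C(n, 2) * p  -- the same logic as the birthday paradox."""
    return comb(n, 2) * p

# Hypothetical numbers: even a one-in-a-million partial-match probability
# yields thousands of expected pairs in a 65,000-profile database,
# because C(65000, 2) is over two billion pairs.
n, p = 65_000, 1e-6
print(f"{expected_matches(n, p):.0f} expected matching pairs")
```

This is why a seemingly “shocking” count of partial matches need not contradict tiny per-pair match probabilities, which is consistent with the Article's prediction that further research will not reveal flaws in the match-probability estimates.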

Privacy in Search Engines: Negotiating Control – Federica Casarosa
Robert Schuman Centre for Advanced Studies


The Internet, and modern communication technologies generally, have radically modified contemporary society, bringing with them new risks for citizens’ privacy. The tangible effects of this technological progress have been, on the one hand, improved tools for the retrieval and collection of data and, on the other, an increased capability for the storage and aggregation of collected information. This can be interpreted positively, in terms of much greater and better opportunities for the development of personality, by making available information that was previously inaccessible (due to the high cost or effort needed to access it). However, the same technical tools can be used to achieve the opposite result: to prevent the expression of users’ personality through a continuous, though imperceptible, control that could shift the interpretation of user profiles from a pre-judgment into a prejudice.

From a legal point of view, different solutions have been put forward, stemming from different approaches. On the one hand, we can observe the case of self-regulation, where technology itself can help to limit the aforementioned risks to personal data; on the other hand, we can take the example of legislative harmonization implemented by the Member States of the EU, where the monitoring activity is carried out by independent authorities, the so-called Data Protection Authorities.

A recent example that shows the market dynamics and the legal reactions concerning data protection is the search engine Google, which received a brief but significant letter from the Art 29 Working Party (hereinafter Art 29 WP) due to the low level of protection assured by the Mountain View-based company in the delivery of its services. The intervention, though not binding, has been the first step for Google in the direction of an improvement of its data protection policy, so as to achieve the level required by European legislation.

Keywords: privacy, search engines, internet, private regulation

Search Query Privacy: The Problem of Anonymization – Ron A. Dolin

Hastings Science and Technology Law Journal, Vol. 2, No. 2, p. 137, Summer 2010


Search queries may reveal quite sensitive information about the querier. Even though many queries are not directly associated with a particular person, it has been argued that the IP addresses and cookies of the users can often be sufficient to figure out who the querier is, especially if tied to information from ISPs regarding IP address assignments at the time of the relevant query. Given that the queries have been subject to discovery both by various governments and third parties, there has been great concern for how to keep such queries private. A typical approach to such privacy legislation, especially in Europe, has been to require either destruction of the data so that it is no longer available for discovery, or anonymization so that it cannot be associated with a particular person. This solution has never been proposed for personal data such as medical information used by doctors or financial information used by credit agencies. Instead, there seems to be an assumption about these types of data that their long-term storage is necessary and/or beneficial to the individual associated with them, or at least to society at large. The framework for maintaining the privacy of these data turns on safeguards where it is being held, user control of its retention and accuracy, and strict legal limitations regarding its discovery. This article briefly reviews a few legal frameworks for data protection both in the U.S. and in Europe. It presents several arguments that the deletion or anonymization of search query data is problematic, and describes a framework similar to the way we handle health data that is more beneficial to all stakeholders. Such an approach would lead to a more uniform solution to data protection in which maintaining search query privacy would not sacrifice the benefits of long term, confidential storage of the data.
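One reason the article can argue that anonymization of query logs is problematic is that common techniques only coarsen the identifiers, while the query text itself can remain identifying. A minimal sketch (the addresses, queries, and the "keep three octets" policy are invented for illustration; partial IP truncation of this kind has been publicly described by search engines, but this is not the article's own example):

```python
def anonymize_ip(ip: str, keep_octets: int = 3) -> str:
    """Coarsen an IPv4 address by zeroing its trailing octets --
    a common form of partial log anonymization."""
    parts = ip.split(".")
    return ".".join(parts[:keep_octets] + ["0"] * (4 - keep_octets))

# A toy query log: two different users on the same /24 network.
log = [
    ("203.0.113.17", "best pizza near elm street"),
    ("203.0.113.42", "jane q. example home address"),  # identifying query text
]

anonymized = [(anonymize_ip(ip), query) for ip, query in log]
for ip, query in anonymized:
    print(ip, "|", query)
```

After truncation both users share the address 203.0.113.0, yet the second record may still identify its author from the query alone, which is the core of the argument that deletion or anonymization mandates do not straightforwardly deliver query privacy.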

Privacy Revisited – GPS Tracking as Search and Seizure – Bennett L. Gershman
Pace University – School of Law


Part I of this Article discusses the facts in People v. Weaver, the majority and dissenting opinions in the Appellate Division, Third Department, and the majority and dissenting opinions in the Court of Appeals. Part II addresses the question that has yet to be decided by the U.S. Supreme Court – whether GPS tracking of a vehicle by law enforcement constitutes a search under the Fourth Amendment. Part III addresses the separate question that the Court of Appeals did not address – whether the surreptitious attachment of a GPS device to a vehicle constitutes a seizure under the Fourth Amendment. The Article concludes that law enforcement’s use of a GPS device to track the movements of a vehicle continuously for an extended period of time is a serious intrusion into a motorist’s reasonable expectation of privacy that constitutes a search under the Fourth Amendment. Moreover, although the issue is somewhat murkier, the attachment of the GPS to a vehicle may constitute a seizure under the Fourth Amendment.

Biometrics, Retinal Scanning, and the Right to Privacy in the 21st Century – Stephen Hoffman
University of Minnesota – Twin Cities – School of Law
Syracuse Science and Technology Law Reporter, 2010


Biometric identification techniques such as retinal scanning and fingerprinting have now become commonplace, but near-future improvements on these methods present troubling issues for personal privacy. For example, retinal scanning can be used to diagnose certain medical conditions, even ones for which the patient has no symptoms and no other way of detecting the problem. If a health insurance company scans the retinas of potential clients before they purchase coverage, those clients could be charged higher premiums for conditions that do not present any issues. Not only is this unfair, but the ease with which these scans can be conducted – including scanning without the subject’s consent or knowledge – presents disturbing privacy concerns and suggests an Orwellian future, controlled by Big Business rather than Big Brother.

Keywords: biometrics, biometric identification, retinal scanning

Othello Error: Facial Profiling, Privacy and the Suppression of Dissent – Lenese C. Herbert

Ohio State Journal of Criminal Law, Vol. 5, pp. 79-129, 2007


In this article, Professor Herbert challenges the U.S. Transportation Security Administration’s post-September 11, 2001, use of Paul Ekman and Wallace Friesen’s Facial Action Coding System (FACS) to identify potential terrorists in American airports. Professor Herbert asserts that invasive visual examination of travelers’ faces and facial expressions for law enforcement purposes under the auspices of protective administrative searches ineffectively protects national and airport security and violates reasonable expectations of privacy. FACS improperly provides unreasonable governmental activity with a legitimizing scientific imprimatur that conceals governmental agents’ race- and ethnicity-based prejudices, which leads to targeting minorities’ faces as portents of danger. Professor Herbert assesses the concept of facial privacy in public, and in doing so, rejects the Supreme Court’s Katz v. United States test and argues in support of constitutional protection of public privacy.

Privacy by Deletion: The Need for a Global Data Deletion Principle – Benjamin J. Keele

Indiana Journal of Global Legal Studies, Vol. 16, No. 1, pp. 363-384

With global personal information flows increasing, efforts have been made to develop principles to standardize data protection regulations. However, no set of principles has yet achieved universal adoption. This note proposes a principle mandating that personal data be securely destroyed when it is no longer necessary for the purpose for which it was collected. Including a data deletion principle in future data protection standards will increase respect for individual autonomy and decrease the risk of abuse of personal data. Though data deletion is already practiced by many data controllers, including it in legal data protection mandates will further the goal of establishing an effective global data protection regime.

A Paradigm Shift in Electronic Surveillance Law – Mark Klamberg
Nordic Yearbook of Law and Information Technology, 2010


Electronic surveillance law is subject to a paradigm shift in which traditional principles are reconsidered and the notion of privacy has to be reconstructed. This paradigm shift is the result of four major changes in our society with regard to: 1) technology; 2) perceptions of threats; 3) interpretation of human rights; and 4) ownership of telecommunications. These changes have created a need to reform both the tools of electronic surveillance and domestic legislation. Surveillance that was previously kept secret is now subject to public debate. The article focuses on systems of “mass surveillance”, such as data retention and signal intelligence, and on whether these are consistent with the European Convention on Human Rights.

Keywords: Electronic Surveillance, Privacy, Signal Intelligence

New Challenges to Data Protection Study – Country Report: United States – Chris Jay Hoofnagle

European Commission Directorate-General Justice, Freedom and Security Report, May 2010


This report is one of 11 country reports produced for the “New Challenges to Data Protection” study, commissioned by the European Commission, and describes the ways in which US law addresses the challenges posed by the new social-technical-political environment.

The hallmark of the US federal approach to privacy is sectoral regulation. A panoply of statutes now regulates specific types of government and business practices, with no broadly-applicable privacy statute governing data collection, use, or disclosure. The Federal Trade Commission has encouraged self-regulation in a number of sectors, and the development of privacy-enhancing technologies. The US approach to privacy is incoherent, sectorally-based, and largely driven by outrage at particular, narrow practices. Still, several innovations from the US approach deserve attention internationally.

First, increasingly, privacy statutes create evolving standards of care, thus encouraging innovation for handling of data and avoiding the reification that can result from prescriptive, detailed regulation. For instance, the Fair Credit Reporting Act mandates an evolving “maximum possible accuracy” standard.

Second, in the direct marketing context, the US has imposed advertiser liability for violations of telemarketing, fax, and spam laws. This is a promising approach to address the use of difficult-to-identify and prosecute service providers that are responsible for illegal marketing campaigns.

Third, audit requirements for access to personal information have had a profound effect in encouraging industry and citizen policing of privacy violations. Audit logs have substantiated long-suspected privacy problems, such as “browsing” of files and news media access to celebrities’ medical records.

Fourth, the US has briefly experimented with “data provenance,” a requirement that buyers of personal information exercise diligence to ensure against misuse of data. Data provenance responsibilities can create incentives to reduce gray and black market sales of personal information.

Finally, most federal privacy law acts as a floor of protections, allowing states to enact stronger rules. This has created a tension between state and federal governments, resulting in a leveling up of protections, because states (which tend to be more activist on privacy issues) can act where the US Congress is occupied with other issues.

Internet Jurisdiction and Data Protection Law: An International Legal Analysis – Christopher Kuner

International Journal of Law and Information Technology, 2010


Data protection law has been the subject of an increasing number of jurisdictional disputes, which have largely been driven by the ubiquity of the Internet, the interconnectedness of the global economy, and the growth of data protection law around the world in recent years. There are also an increasing number of instances where data protection law conflicts with legal obligations in other areas. Moreover, the rapid development of new computing techniques (such as so-called ‘cloud computing’) is putting even greater pressure on traditional jurisdictional theories. Jurisdictional uncertainties about data protection law have important implications, since they may dissuade individuals and companies from engaging in electronic commerce, can prove unsettling for individuals whose personal data are processed, and impose burdens on regulators. These difficulties are increased by the fact that, so far, there is no binding legal instrument of global application covering either jurisdiction on the Internet or data protection. This article examines international jurisdiction as it relates to data protection law, and specifically to instances in which jurisdiction under data protection law may be considered ‘exorbitant’, with a particular focus on rules of public international law.

Developing an Adequate Legal Framework for International Data Transfers – Christopher Kuner

REINVENTING DATA PROTECTION? S. Gutwirth, eds., pp. 263-273, Springer Science+Business Media B.V., 2009


With the EU Data Protection Directive having been in force now for nearly ten years, it is wise to examine the basic concepts and assumptions on which the Directive is based, to determine whether it is functioning properly. It is the thesis of this paper that the present EU legal framework for “adequacy” decisions for the international transfer of personal data is inadequate, in both a procedural and substantive sense, and needs reform. The framework was created for a world in which the Internet was not widely used, and in which data did not flow as easily across national borders as they do now. The present system of adequacy decisions has been grievously overloaded by the great increase in data flows in the past few years, and also drains resources that could better be used in other areas of data protection. European policymakers should take a hard look at the current adequacy system and its present failings, and reform the system in a way that more effectively protects the interests of data controllers, individuals, and data protection supervisory authorities.

Keywords: European Union, data protection, privacy, adequacy, international data transfers, global data flows, APEC, accountability

EU challenges to data protection

Under the Stockholm Programme and provisions of the Lisbon Treaty the European Commission is due to present this year:
1) Communication on a new legal framework for the protection of personal data after the entry into force of the Lisbon Treaty;
2) New comprehensive legal framework for data protection;
3) Recommendation to authorise the negotiation of a personal data protection agreement for law enforcement purposes with the USA and
4) Communication on Privacy and trust in Digital Europe: ensuring citizens’ confidence in new services.

As background to the first and second proposal the Commission has published an in-depth study: New Challenges to Data Protection Study – Final Report (pdf):

“The purpose of the study was to identify the challenges for the protection of personal data produced by current social and technical phenomena such as: the Internet; globalisation; the increasing ubiquity of personal data and personal data collection; the increasing power and capacity of computers and other data-processing devices; special new technologies such as RFID, biometrics, face (etc.) recognition, etc.; increased surveillance (and “dataveillance”); and increased uses of personal data for purposes for which they were not originally collected, in particular in relation to national security and the fight against organised crime and terrorism.”

See also:
* Working Paper No 1: The challenges to European data protection laws and principles (link);
* Working Paper 2: Data protection laws in the EU, Comparative Chart of National Laws, and Country Report on Greece.

EU lists data-sharing policies for first time

The EU and its 27 member states have almost 20 programmes, agencies and agreements governing the exchange of personal, business and telecoms data of EU citizens, a first-ever audit (COM (2010)385 final) has shown.

EU home affairs commissioner Cecilia Malmstrom, who on Tuesday published the stocktaking list to make good on a transparency promise to MEPs, said the overview will help when developing further information-exchange policies.

She said she was committed to certain principles of proportionality and relevance.

“Citizens should have the right to know what personal data are kept and exchanged about them. One of my first actions as Commissioner for Home Affairs was, therefore, to order this overview, as called for by the European Parliament. I am happy to be able to present the overview today, together with a series of core principles for how our policy should develop in this area. This will help us keep the bigger picture in mind as we come to review the existing tools and adapt to change over time.”

The principles cover issues such as fundamental rights, proportionality and accurate risk management as well as clear allocation of responsibilities, cost effectiveness and reviewing clauses. Safeguarding people’s fundamental rights as enshrined in the EU Charter of Fundamental Rights, particularly their right to privacy and personal data protection, will be a primary concern for the Commission when developing new proposals that involve processing of personal data. In all future policy proposals, the Commission will assess the initiative’s expected impact on individuals’ rights and analyse whether it is necessary and proportionate. Compliance with the rules on personal data protection will in all cases be subject to control by an independent authority at national or EU level.

Ms Malmstrom also reviewed the EU’s counterterrorism strategy, where policies, she admitted, were often made on an “ad hoc” basis.

“The European Union has developed actions based on events, after an attack or an attempted attack causes a huge media tension, a great fear from the population and a pressure on the political leaders to act. But some of these measures are maybe not as effective as they seem in the first place.”

She pledged a more “comprehensive and long-term” approach in the future. The stocktaking exercise lists the existing measures to prevent, protect, pursue and respond to terrorist threats, underlining efforts to fight terrorist propaganda and recruitment, measures to avoid attacks with explosives, and prevention of chemical, biological and nuclear threats. The communication also identifies future challenges in areas such as radicalisation, crisis management and response.

Tuesday’s information will feed into a broader EU internal security document due to be published in autumn, with the Commission noting that while terrorist attacks are on the decrease, terrorists’ methods are constantly evolving.

Behavioral Profiling at Airports

Nature asks the question: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on the US Transportation Security Administration’s SPOT (Screening of Passengers by Observation Techniques) programme. And in the United States, the Department of Homeland Security (DHS) is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious — not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog. H/T to Bruce Schneier.

Profiling in the European Union: A high-risk practice

New INEX/CEPS publication. Download it here.


Profiling through predictive data mining is already a reality worldwide, including in the European Union. This modern technique relies on the massive processing of personal data in order to identify patterns that allow for the automatic categorisation of individuals. Yet no satisfactory debate is taking place on how the use of profiling in this particular area can encroach upon the fundamental rights and freedoms of individuals, argue the authors of this INEX Policy Brief.
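For readers unfamiliar with the mechanics, the kind of predictive profiling the brief describes can be sketched in a few lines of Python. Everything here is invented for illustration — the field names, the “patterns”, the risk threshold — and deliberately crude: historical records are mined for attribute values that co-occur with a label, and new individuals are then automatically categorised by how well they match those patterns, regardless of their actual intent.

```python
# Toy sketch of pattern-based profiling (all data and thresholds invented).
from collections import Counter

def learn_pattern(records, label_key="flagged"):
    """For each (attribute, value) pair, compute how often it
    co-occurred with the label in the historical records."""
    hits, totals = Counter(), Counter()
    for rec in records:
        for key, value in rec.items():
            if key == label_key:
                continue
            totals[(key, value)] += 1
            if rec[label_key]:
                hits[(key, value)] += 1
    return {kv: hits[kv] / totals[kv] for kv in totals}

def categorise(record, pattern, threshold=0.5):
    """Automatically categorise a new individual: the score is the
    mean co-occurrence rate of their attributes with the label."""
    scores = [pattern.get(item, 0.0) for item in record.items()]
    risk = sum(scores) / len(scores) if scores else 0.0
    return "high-risk" if risk >= threshold else "low-risk"

# Hypothetical historical data used to "learn" the pattern.
history = [
    {"travel": "frequent", "payment": "cash", "flagged": True},
    {"travel": "rare",     "payment": "card", "flagged": False},
    {"travel": "frequent", "payment": "card", "flagged": False},
    {"travel": "rare",     "payment": "cash", "flagged": False},
]
pattern = learn_pattern(history)
print(categorise({"travel": "frequent", "payment": "cash"}, pattern))
```

The sketch makes the authors’ concern concrete: anyone whose attributes merely resemble past flagged records is labelled “high-risk” automatically, with no individualised assessment.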

Brain scans misused as lie detectors

Experts meeting at a conference at the Institute of Advanced Studies in Glasgow have warned that measures are needed to stop brain scans being misused by courts, insurers and employers.

Attempts have been made to use magnetic resonance imaging scans as lie detectors or to demonstrate mental health problems in more than 90 capital punishment cases in the US, as well as in other proceedings in Europe and Asia. While they have been rejected in many cases, scan results have sometimes been accepted as evidence.

Mr Schafer, co-director of the SCRIPT Centre for Research in Intellectual Property and Technology at the University of Edinburgh’s school of law, said the UK had to consider how to prevent MRI scans being misused – and how to protect people’s privacy.

“After data mining and online profiling, brain imaging could well become the next frontier in the privacy wars. The promise to read a person’s mind is beguiling, and some applications will be greatly beneficial. But a combination of exaggerated claims by commercial providers, inadequate legal regulation and the persuasive power of images bring very real dangers for us as citizens.”

Mr Schafer added there was also a chance employers could seek to use scans to test the honesty of an individual’s CV, or that insurance companies could do the same. At least one US company is offering scans to employers recruiting staff.

Joanna Wardlaw, professor of applied neuroimaging at the University of Edinburgh, said brain scans could show differences between groups who thought differently in a research setting. But she added:

“It’s very, very difficult to apply the results of an individual’s scan in situations such as where there is a threat of legal action.”

EU adopts “Instrument for compiling data and information on radicalisation processes”

On 16 April the General Affairs Council of the Council of the European Union adopted the Council Conclusions “on the use of a standardised, multidimensional semi-structured instrument for collecting data and information on the processes of radicalisation in the EU.”

The Conclusions start as follows:

Following the resurgence of terrorist activities across the world in recent years, in 2005 the European Union developed a global counter-terrorism strategy which had prevention as one of the four strands of its strategic commitment. The purpose of that strand is to prevent individuals from turning into terrorists by tackling the factors and profound causes which may lead to radicalisation and recruitment both in Europe and elsewhere.
(…) In the Revised EU Radicalisation and Recruitment Action Plan – Implementation Plan, it is recommended that the Member States take steps to share information on radicalisation and put in place mechanisms to systematically analyse and assess the extent of radicalisation on the basis of a multidisciplinary approach.

The Conclusions thus propose an instrument called the “Instrument for compiling data and information on violent radicalisation processes” (EU doc no: 7984/10 ADD 1), “as a basic mechanism for collecting data and information on violent radicalisation (VR) processes which could prove particularly useful at the information-gathering stage”.

The Instrument is to be used by Member States, EU agencies, security and intelligence agencies, and police forces. It is intended to prevent people from turning to terrorism through radicalisation: first, by analysing the various environments in which radicalisation occurs; and second, by introducing systematic ways of exchanging information on individuals or groups who use hate speech or incite terrorism.

Tony Bunyan of Statewatch wrote a critical analysis of the Council Conclusions. Click here to read it.