Wednesday, June 25, 2008
Symposium - Privacy v. Publicity in the Virtual World
The Darklight Film Festival is hosting what should be a very interesting symposium on Privacy v. Publicity in the Virtual World this Friday, June 27th in the Film Base, Curved Street, Temple Bar at 10am:
For a new generation of 'digital natives', privacy is no longer a priority. Web 2.0 has brought with it a transformation in how we view the need for privacy and engage with the public realm - but at what cost? The discussion will be prefaced by a keynote address from Daniel J. Solove, Associate Professor of Law at the George Washington University Law School and author of The Digital Person: Technology and Privacy in the Information Age. Chaired by Irish Times writer Karlin Lillington, the panel will also feature Irish blogging guru Damien Mulley and solicitor/digital rights expert Caroline Campbell.
Issues to be considered include:
* Can bloggers say what they like?
* What's wrong with having nothing to hide?
* Who is really stalking you on Facebook? And does anyone care anymore?
* Is there a generation gap in approaches to online privacy?
Monday, June 23, 2008
Civil servants' illegal disclosure of personal information is "routine and very comprehensive"
The Independent has an update on the Data Protection Commissioner's investigation into the Department of Social and Family Affairs:
FOURTEEN employees of the Department of Social and Family Affairs are being investigated for allegedly passing comprehensive personal information to insurance companies on a regular basis.
The Irish Independent has learned that some of the alleged breaches -- which came to light in April 2007 -- involve "one of Ireland's largest insurance companies" and date back to 2006.
The allegations involve the passing of personal and sensitive information, contained on data systems within the Department of Social and Family Affairs (DSFA), to third parties for commercial benefit.
The DSFA carries all personal details on all individuals in the state including PPS numbers, dates of birth, addresses as well as earnings details.
Private investigators work for the insurance companies to compile cases against drivers. But there is concern about the level of information that the inspectors for the insurance companies are obtaining.
Data Protection Commissioner Billy Hawkes said in an email to the DSFA last June: "I inspected five investigator files yesterday during a planned call back to X (large insurance company).
"This revealed very-worrying levels of disclosure from the DSFA to private investigators. From what I could discern, such disclosures are routine and very comprehensive."
I've blogged before about other examples in this Department of disregard for citizens' privacy.
Thursday, June 19, 2008
Data protection and bulletin boards
John Breslin of (amongst other things) Boards.ie has an interesting post on a data protection complaint from a banned user. The complaint? After the banning, all the posts he had previously made appeared with the word "Banned" next to them (which is the default setting for many forum software packages). The view of the Data Protection Commissioner was that this was an unauthorised disclosure of personal information (i.e. the user's status on the site), apparently on the basis that the username was very close to his real name:

Quite apart from the narrow data protection aspect of this particular case, it raises an interesting issue about the social dynamics of social software and whether the law might hinder effective moderation.
One of the ways in which moderators on forums discourage certain behaviour is by putting users into a sin bin or banning them. Going one step further by naming and shaming - i.e. publicising the sanction by labelling posts from those users - has a social effect in two ways. At a general level it may help to reinforce the norms of the site by publicly reinforcing the message that certain types of behaviour are unacceptable, and at the individual level it may also act as a deterrent to the user who knows that any sanction against them will be publicised.
If this sounds familiar it's because this argument mirrors, on a much smaller scale, the role of publicity in the criminal justice system. It also mirrors the increasing tendency in other areas for public bodies to "name and shame", whether it be young offenders in England or the list of tax defaulters in Ireland who settle with the Revenue.
The broader issue this raises is whether naming and shaming is an acceptable option - and if acceptable in (e.g.) the context of tax defaulters, why not in the context of troublesome users? Should it matter whether it's a public or private body naming and shaming? Should it matter that the gravity of the "offence" is much greater in one case than the other? If bulletin boards / forums can't publicly reveal which users have been banned or sin-binned, will this make the life of moderators more difficult?
Tuesday, June 10, 2008
How not to protect a domain name - the D4hotels saga
Remember D4hotels.com - the low cost hotels site which completely failed to protect variants of its name against cybersquatters? Well, it now transpires that the ownership of D4hotels.com itself is contested:
Update (27.1.09): It now seems that this case has been settled.
A dispute over ownership of the D4hotels.com domain name and website has come before the Commercial Court.
While there's very little detail in this report, it suggests that there was no explicit agreement as to ownership of the intellectual property in the domain name and the site itself - which if true is one of the most fundamental mistakes one can make when establishing an online business. This, together with the failure to protect domain name variants, means that I will be using this case in class as a cautionary tale.
MJBCH Ltd, the leaseholder of the former Berkeley Court Hotel and the former Jury's hotels in Ballsbridge and The Towers, claims exclusive entitlement to the operation and management of the domain name and website.
It has alleged it had a hotel operation and management agreement with the two defendant companies -- Cloud Nine Management Services Ltd and Beechside Company Ltd, trading as The Park Hotel, Kenmare -- to manage the hotels as the Ballsbridge Inn, Ballsbridge Towers and the Ballsbridge Court hotel, but that agreement was terminated in February.
In those circumstances, it claims the defendants have no entitlement to use the d4 domain name and website.
...
The defendant companies deny the claims and say they at no time abandoned their rights to or property in the domain name, website or business name.
The companies say that, under their agreement with MJBCH of October 2007, they were authorised to act as the exclusive operator and manager of the hotels and that the domain name D4hotels.com was registered by Beechside in September 2007.
They also say the management agreement was summarily terminated by MJBCH in February and that at no stage had it been agreed the D4 domain name and website would become the property of MJBCH.
NY Attorney General forces ISPs to filter Internet
In another bad day for the end to end principle, the New York Times reports that the Attorney General of New York has succeeded in forcing ISPs to filter their users' internet connections. The expressed motivation is to prevent users from accessing child pornography, though this will be trivially easy to circumvent. There are many problems with internet filtering, and I've written a short summary of them (in a different context) for the Digital Rights Ireland blog. But the New York scenario raises one particular problem - whether this form of censorship, implemented and administered by private actors (who will face an incentive to overblock), can be reconciled with the rule of law. The issues raised are very similar to those presented by the UK Cleanfeed system, about which Colin Scott and myself had this to say at the inaugural TELOS Conference last year:
This presents a number of challenges for the rule of law. Even if an individual ISP’s actions can be described as voluntary, the effect is to subject users without their consent to a state mandated regime of internet filtering of which they may be unaware. The Internet Watch Foundation (IWF), which determines which URLs should be blocked, has a curious legal status, being a charitable incorporated body, funded by the EU and the internet industry, but working closely with the Home Office, the Ministry of Justice, the Association of Chief Police Officers and the Crown Prosecution Service. There is no provision for site owners to be notified that their sites have been blocked. While there is an internal system of appeal against the designation of a URL to be blocked, that mechanism does not provide for any appeal to a court – instead, the IWF will make a final determination on the legality of material in consultation with a specialist unit of the Metropolitan Police.
Consequently the effect of the UK policy is to put in place a system of censorship of internet content, without any legislative underpinning, which would appear (by virtue of the private nature of the actors) to be effectively insulated from judicial review. Though the take-up of the regime may be attributable to the steering actions of government, the way in which the regime is implemented and administered complies neither with the process nor the transparency expectations which would attach to legal instruments.
There is also cause for concern about the incentives which delegating filtering to intermediaries might create. From the point of view of the regulator, requiring intermediaries to filter may allow them to externalise the costs associated with monitoring and blocking, perhaps resulting in undesirably high levels of censorship. But perhaps more worrying are the incentives which filtering creates for intermediaries. Kreimer has argued that by targeting online intermediaries regulators can recruit “proxy censors”, whose “dominant incentive is to protect themselves from sanctions, rather than to protect the target from censorship”. As a result, there may be little incentive for intermediaries to engage in the costly tasks of distinguishing protected speech from illegal speech, or to carefully tailor their filtering to avoid collateral damage to unrelated content. Kreimer cites the US litigation in Center for Democracy & Technology v. Pappert to illustrate this point. In that case more than 1,190,000 innocent web sites were blocked by ISPs even though they had been required to block fewer than 400 child pornography web sites.
Orin Kerr has more.
Edit (13.06.08): Richard Clayton indicates that the New York Times coverage may be inaccurate. He suggests that what the ISPs have agreed to is limited to removing certain newsgroups and taking down sites which they host - but does not include filtering of sites hosted elsewhere. There's also some confusion as to just what the effect on usenet will be, with Declan McCullagh reporting that in the case of Verizon all the newsgroups in the alt.* hierarchy will no longer be offered.
Sunday, June 08, 2008
The Future of the Internet and How to Stop It
Jonathan Zittrain's superb new book The Future of the Internet and How to Stop It is now available for free download. His central theme is that the freedom associated with general purpose PCs and an end-to-end internet is increasingly being threatened - a variety of forces (including a push by the content industry for DRM, security fears, and state regulation) are leading towards a growth in "tethered appliances" outside the control of their users, coupled with increased internet filtering and gatekeeping. The result is to dramatically shift the balance struck by the law and possibly to threaten traditional freedoms. From the synopsis:
IPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can’t be easily modified by anyone except their vendors or selected partners. These “tethered appliances” have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted—but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet—its “generativity,” or innovative character—is at risk.
A must read.
Tuesday, May 27, 2008
Deutsche Telekom used call data to spy on reporters
From the New York Times:
Germany was engulfed in a national furor over threats to privacy on Monday, after an admission by Deutsche Telekom that it had surreptitiously tracked thousands of phone calls to identify the source of leaks to the news media about its internal affairs.
Spiegel Online has more:
In a case that echoes the corporate spying scandal at Hewlett-Packard, Deutsche Telekom said there had been “severe and far-reaching” misuse of private data involving contacts between board members and reporters...
The company itself, led by then CEO Kai-Uwe Ricke and monitored by a supervisory board headed up by then Deutsche Post CEO Klaus Zumwinkel, is accused of being behind the alleged spying. And the Berlin consulting firm, whose chief executive sent the April 28 fax, was hired to carry it out. The goal of the "Clipper" and "Rheingold" surveillance programs, as well as other "secondary projects," the fax makes clear, was to "analyze several hundred thousand landline and mobile connection data sets of key German journalists reporting on Telekom and their private contacts."
But that wasn't all. The same procedure, according to the memo, was repeated with "several supervisory board members on the employee side" -- "for a total period of one-and-a-half years."
Monday, April 07, 2008
Data Protection and Search Engines - The Article 29 Working Party Weighs In
The Article 29 Working Party has issued its long-awaited Opinion on Data Protection Issues Related to Search Engines. This is a substantial document and will need close consideration, but some highlights spring out and are worth excerpting.
The WP confirms that the Data Retention Directive (contrary to what has been claimed by some) does not apply to search engines:
Search engine services in the strict sense do not in general fall under the scope of the new regulatory framework for electronic communications of which the ePrivacy Directive is part. Article 2 sub c of the Framework Directive (2002/21/EC), which contains some of the general definitions for the regulatory framework, explicitly excludes services providing or exercising editorial control over content:
"Electronic communications service" means a service normally provided for remuneration which consists wholly or mainly in the conveyance of signals on electronic communications networks, including telecommunications services and transmission services in networks used for broadcasting, but exclude services providing, or exercising editorial control over, content transmitted using electronic communications networks and services; it does not include information society services, as defined in Article 1 of Directive 98/34/EC, which do not consist wholly or mainly in the conveyance of signals on electronic communications networks;
Search engines therefore fall outside of the scope of the definition of electronic communication services.
A search engine provider can however offer an additional service that falls under the scope of an electronic communications service such as a publicly accessible email service which would be subject to ePrivacy Directive 2002/58/EC and Data Retention Directive 2006/24/EC.
Article 5(2) of the Data Retention Directive specifically states that “No data revealing the content of the communication may be retained pursuant to this Directive”. Search queries themselves would be considered content rather than traffic data and the Directive would therefore not justify their retention. Consequently, any reference to the Data Retention Directive in connection with the storage of server logs generated through the offering of a search engine service is not justified.
Consent cannot be implied in the case of anonymous users:
Consent cannot be construed for anonymous users of the service and the personal data collected from users who have not chosen to authenticate themselves voluntarily. These data may not be processed or stored for any other purpose than acting upon a specific request with a list of search results.
The "necessary for the performance of a contract" exception will seldom be available:
Processing may also be necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract. This legal basis may be used by search engines to collect personal data that a user voluntarily provides in order to sign-up for a certain service, such as a user account. This basis may also be used, similar to consent, to process certain well-specified categories of personal data for well-specified legitimate purposes from authenticated users. Many internet companies also argue that a user enters into a de facto contractual relationship when using services offered on their website, such as a search form. However, such a general assumption does not meet the strict limitation of necessity as required in the Directive.
Personalised advertising raises particular problems:
Search engine providers that wish to provide personalised advertising in order to increase their revenues, may find a ground for the legitimate processing of some personal data in Article 7 (a) of the Directive (consent) or Article 7 (b) of the Directive (performance of a contract) but it is difficult to find a legitimate ground for this practice for users who have not specifically signed in based on specific information about the purpose of the processing. The Working Party has a clear preference for anonymised data.
Search engines may not store information purely on the basis that it may be useful in later criminal proceedings:
Law enforcement authorities may sometimes request user data from search engines in order to detect or prevent crime. Private parties may also try to obtain a court order addressing a search engine provider to hand over user data. When such requests follow valid legal procedures and result in valid legal orders, of course search engine providers will need to comply with them and supply the information that is necessary. However, this compliance should not be mistaken for a legal obligation or justification for storing such data solely for these purposes. Moreover, large amounts of personal data in the hands of search engine providers may encourage law enforcement authorities and others to exercise their rights more often and more intensely which in turn might lead to loss of consumer confidence.
A maximum retention period of six months is permissible, and users must be informed in advance:
In practice, the major search engines retain data about their users in personally identifiable form for over a year (precise terms vary). The Working Party welcomes the recent reductions in retention periods of personal data by major search engine providers. However, the fact that leading companies in the field have been able to reduce their retention periods suggests that the previous terms were longer than necessary. In view of the initial explanations given by search engine providers on the possible purposes for collecting personal data, the Working Party does not see a basis for a retention period beyond 6 months...
In case search engine providers retain personal data longer than 6 months, they will have to demonstrate comprehensively that it is strictly necessary for the service. In all cases search engine providers must inform users about the applicable retention policies for all kinds of user data they process.
Update: Lilian Edwards has more on the Opinion, including the problems it poses for people search services.
Thursday, April 03, 2008
Filter or Else! Music Industry Sues Irish ISP
I've written a short update for the Society for Computers and Law on the music industry litigation against Eircom. Excerpt:
The music industry in Ireland launched its campaign against peer-to-peer downloading and uploading in 2003/2004 with an education and awareness campaign. That campaign included national advertising aimed at end-users and specific warnings addressed to intermediaries such as companies and universities, as well as instant messages sent to users who were uploading particular songs.
In 2005 the music industry changed tack and brought the first action before the Irish courts (EMI and ors. v Eircom and ors. [2005] 4 IR 148) seeking to identify 17 individuals alleged to be illegally file-sharing. In that case the High Court granted disclosure of these identities under the Norwich Pharmacal [1974] AC 133 jurisdiction. Two further applications were made to the High Court in 2006 and 2007, identifying some 99 users in all. However, despite the significant publicity which these actions received, they do not appear to have had any more than a short-term effect in deterring Irish users from sharing music.
At this point, and in line with the strategies pursued by the industry body IFPI elsewhere, the music industry in Ireland appears to have decided to shift the focus of its attention from the end-user towards the intermediary, and in particular towards seeking to compel ISPs to police the behaviour of their users.
The full text is available on the SCL site (no subscription required).
Wednesday, March 26, 2008
A public service announcement about public surveillance
This animated short by David Scharf is one of the best explanations I've seen as to why we should be worried about sleepwalking into a surveillance society, not to mention a beautifully crafted piece of visual art in its own right.
You can see larger, better quality versions of the video at http://www.huesforalice.com/bbs/.
Tuesday, March 04, 2008
Domain Name Registrars - The New Points of Control?
Jonathan Zittrain has pointed out that regulation of the internet has tended to proceed - whether by way of litigation or legislation - by identifying particular intermediaries and compelling them to act as points of control over user behaviour. The intermediaries targeted have included hosts, ISPs, search engines, hyperlinkers and financial intermediaries (which have been compelled, for example, to stop credit card payments to gambling sites). Some relatively recent developments suggest that domain name registrars are joining them in the firing line - and that this may result in some interesting cross-border legal issues.
An early example took place in the Rate Your Solicitor saga, where the plaintiff in an Irish defamation action succeeded in 2006 in persuading US registrar Godaddy to disable the rateyoursolicitor.com domain (apparently for false WHOIS data) notwithstanding that Godaddy would appear to have enjoyed immunity under section 230 CDA. (Not that this deterred the critics of Irish lawyers, who promptly moved to rate-your-solicitor.com where they remain today.)
At around the same time, the plaintiffs in the Spamhaus litigation set out to persuade an Illinois court to order ICANN (rather than the Canadian registrar!) to suspend the Spamhaus domain name - on the basis that Spamhaus (located in the UK) could not otherwise be made to comply with that court's order. (Ultimately, however, the court accepted that ICANN and the registrar were not involved in the defendant's actions nor able to control them, and consequently an order should not be directed towards them.)
The Spamhaus case didn't, however, deter the lawyers acting for Bank Julius Baer in its attempt to silence Wikileaks.org, who succeeded (albeit temporarily) last month in persuading the Californian courts to issue an interim order requiring the registrar (Dynadot) to disable the Wikileaks.org domain name and remove all DNS hosting records. (This despite the lack of any obvious role for the Californian courts in adjudicating on a dispute between a Cayman Islands bank, its Swiss parent company, a Swiss former employee, and the various individuals around the world responsible for Wikileaks, and despite the lack of any full hearing.) Daithi has a particularly good post on why this amounted, in effect, to an internet death penalty and was a disproportionate prior restraint on speech.
Now the New York Times reports that the US government has ordered domain name registrars to disable domain names which it alleges breach its ban on trade with Cuba:
Steve Marshall is an English travel agent. He lives in Spain, and he sells trips to Europeans who want to go to sunny places, including Cuba. In October, about 80 of his Web sites stopped working, thanks to the United States government.
What's the significance of this? As in some of the other cases, it means that internet speech may be shut down without any prior notice to a party, and without any hearing. It also means that disputes which have no underlying connection with a particular jurisdiction may end up subject to the law of that jurisdiction:
The sites, in English, French and Spanish, had been online since 1998. Some, like www.cuba-hemingway.com, were literary. Others, like www.cuba-havanacity.com, discussed Cuban history and culture. Still others — www.ciaocuba.com and www.bonjourcuba.com — were purely commercial sites aimed at Italian and French tourists.
“I came to work in the morning, and we had no reservations at all,” Mr. Marshall said on the phone from the Canary Islands. “We thought it was a technical problem.”
It turned out, though, that Mr. Marshall’s Web sites had been put on a Treasury Department blacklist and, as a consequence, his American domain name registrar, eNom Inc., had disabled them. Mr. Marshall said eNom told him it did so after a call from the Treasury Department; the company, based in Bellevue, Wash., says it learned that the sites were on the blacklist through a blog.
Either way, there is no dispute that eNom shut down Mr. Marshall’s sites without notifying him and has refused to release the domain names to him. In effect, Mr. Marshall said, eNom has taken his property and interfered with his business. He has slowly rebuilt his Web business over the last several months, and now many of the same sites operate with the suffix .net rather than .com, through a European registrar. His servers, he said, have been in the Bahamas all along.
Susan Crawford, a visiting law professor at Yale and a leading authority on Internet law, said the fact that many large domain name registrars are based in the United States gives the Treasury’s Office of Foreign Assets Control, or OFAC, control "over a great deal of speech — none of which may be actually hosted in the U.S., about the U.S. or conflicting with any U.S. rights."
"OFAC apparently has the power to order that this speech disappear," Professor Crawford said.
There's also a very important practical point here. Website owners are already acutely aware that hosting liability varies from jurisdiction to jurisdiction - and for that reason many choose to host in the US, where section 230 CDA makes it less likely that a host will take down a site based on vague and unjustified threats. These cases illustrate that domain owners should be equally cautious in deciding which registrar to use - pick a registrar located in the wrong jurisdiction, or one which (as Dynadot appeared to do in the Wikileaks case) caves in too easily, and you may find your domain name vanishes.
Friday, February 29, 2008
German Constitutional Court recognises a new right of "Confidentiality and Integrity of Computer Systems"
On 27 February the German Constitutional Court issued what's being described as a landmark ruling which recognises a new fundamental right of privacy, confidentiality and integrity in computer systems. The case was brought to challenge a law which, amongst other things, permitted government agencies to hack into computer systems, for example by using a Trojan Horse to monitor suspects' internet use. The reasoning of the Court was based on its finding that computer systems will often contain information presenting a complete picture of a person's most private life:
[Computer systems] alone or in their technical interconnectedness can contain personal data of the affected person in a scope and multiplicity such that access to the system makes it possible to get insight into relevant parts of the conduct of life of a person or even gather a meaningful picture of the personality.

Ralf Bendrath has detailed analysis of the decision and its background. Meanwhile, the IPKat suggests that this may have implications for the use of privacy invasive DRM and for disclosure of information held by ISPs in civil cases.
Thursday, February 28, 2008
'Cause I'm the Taxman: Facebook and the Revenue
Now my advice for those who die,
Declare the pennies on your eyes.
'Cause I’m the taxman,
Yeah, I’m the taxman. - The Beatles
There's been a good deal of media coverage of the revelation by Evert Bopp that the Revenue is gathering information from Facebook and other social networking sites as part of its audits of individuals. There has been a tendency to present this as a privacy issue, leading to discussion of whether information on social networking sites should be treated as essentially in the public domain. This seems to me, however, to be the wrong way of looking at this question, not least because a definition of privacy remains elusive. Leaving privacy per se aside, are there other reasons why this sort of material should not be used?
There are, for me, at least two reasons. First, this material is often unreliable. As one Irish blogger demonstrated recently, it's quite easy to fake a convincing profile in someone else's name (Google cache). Consequently, government agencies should be slow to rely on information derived in this way. Where they do so, they should inform the individual concerned and offer that person an opportunity to correct or challenge the material. (Something which would in any event be required by the Data Protection Rules.)
Secondly, and perhaps more importantly, this may lead to irrelevant criteria being used in a way which harms individuals. The legitimacy of bureaucracy is based, at least in part, on the impersonal application of general rules. Bureaucrats are not allowed to take other factors - such as the sexual orientation of the individual - into account, and indeed are expressly prohibited from inquiring about them. But where social networking profiles are searched, this principle is likely to be undermined. Suppose, for example, that Blogger X is openly out on his blog. That is no business of the Revenue in dealing with him. But if an official is influenced by what a search turns up, X may be discriminated against in a way which would otherwise have been unlikely.
Daniel Solove has considered some of the issues arising from what he describes as the "self exposure problem" in his fascinating new book The Future of Reputation: Gossip, Rumor and Privacy on the Internet - the full text of which is now available online under a non-commercial CC licence. It's required reading for anyone interested in this area.
Wednesday, February 27, 2008
An overview of ISP Voluntary / Mandatory Filtering
Irene Graham of Electronic Frontiers Australia has compiled an invaluable overview of ISP level filtering systems as part of the EFA campaign against mandatory filtering in Australia. What's most striking about her survey is that unlike much previous work which focused on countries such as China or Saudi Arabia, she looks at the systems put in place in various democracies (including Canada, the United Kingdom and Finland) but still finds the same problems - a lack of democratic legitimacy, opaque systems, overblocking, and indications of function creep.
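One of the problems Graham identifies - overblocking - is easy to illustrate in a few lines. Here's a minimal Python sketch (all hostnames and addresses are hypothetical, drawn from documentation address space): when an ISP implements a block at the IP level, every unrelated site sharing that address through virtual hosting disappears along with the target.

```python
# Toy model of IP-based ISP blocking. All hostnames and addresses are
# hypothetical; real blocklists are larger, but the mechanism is the same.

hosting = {  # IP address -> sites served from it via shared virtual hosting
    "198.51.100.10": ["offending.example", "charity.example", "blog.example"],
    "198.51.100.11": ["shop.example"],
}

def ip_for(site):
    """Return the IP address a site resolves to in our toy hosting table."""
    for ip, sites in hosting.items():
        if site in sites:
            return ip
    return None

# The ISP is ordered to block one site, and does so at the IP level.
blocked_ips = {ip_for("offending.example")}

def is_blocked(site):
    return ip_for(site) in blocked_ips

# Every innocent neighbour on the shared address is silently blocked too.
overblocked = [s for sites in hosting.values() for s in sites
               if is_blocked(s) and s != "offending.example"]
print(overblocked)  # ['charity.example', 'blog.example']
```

Because the opaque systems Graham describes publish neither the blocklist nor the blocking method, the owners of the collaterally blocked sites may never learn why they have vanished.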
Full Disclosure and the Law - a European Survey
Full disclosure - the practice of making security vulnerabilities public - is an area of uncertain legality. The companies whose products are shown to be insecure would like to suppress this information. In addition, new laws criminalising so-called hacking tools have caused security researchers to worry that simply possessing the tools of their trade or publishing their research may expose them to criminal liability. Legal certainty isn't helped by the fact that the laws on this point differ greatly from jurisdiction to jurisdiction. Federico Biancuzzi has now produced a very helpful survey of European laws in this area by interviewing lawyers (including myself) from twelve EU countries on their national laws. Most seem to agree that the law is unsettled. But some common themes do emerge. In particular, full disclosure is not being regulated by any specific law - instead, the consequences of full disclosure tend to be considered in a rather ad hoc way under a variety of different legal regimes. In addition, civil liability (imposed by general copyright law or by specific contractual or licensing restrictions) appears to be just as much a deterrent to research and publication as newer laws criminalising hacking tools.
Wednesday, February 13, 2008
Sabam v. Tiscali (Scarlet) - English translation now available
The recent Belgian decision in SABAM v. Tiscali (Scarlet) appears to be the first time in Europe a court has considered whether ISPs can be required to monitor or filter the activities of their users in order to stop filesharing on peer to peer networks. The Cardozo Arts & Entertainment Law Journal has now provided an English translation of the decision. The decision deserves to be read in full, but here are some of the most important passages:
the issue of future potential encryption cannot today be an obstacle to injunctive measures since this one is currently and technically possible and capable of producing a result, as it is in the case before this court; that the internet sector is constantly evolving; that in crafting injunctive relief, the judge cannot consider speculations about potential future technical developments, especially if these might also be subject to parallel adaptations concerning blocking and filtering measures
the average cost of implementing these measures does not appear excessive; that, according to the expert, this estimated cost over a 3 year period (the time of amortization) and on the basis of the number of users on the order of 150,000 persons should not exceed 0.5 each month for each user
these measures could also have as secondary consequence to block certain authorized exchanges; that this circumstance that an injunctive measure affects a group of information [exchanges], of which some are not infringing (such as film, book, CD. . ..) does not prevent, nevertheless, it [the court] from enforcing the injunction
SA Scarlet Extended disputes, nonetheless, this court’s power to order an injunction by arguing that:
* the technical measures requested would lead to impose upon it [Scarlet] a general monitoring obligation for the totality of all “peer-to-peer” traffic, which would constitute an on-going obligation contrary to the legislation on electronic commerce (Directive 2000/31 ...,
* the installation of filtering measures may lead to the loss of the safe harbor from liability for mere conduit activities that technical intermediaries enjoy by virtue of Article 12 of Directive 2000/31,
* the technical measures requested in so far as they lead to “installing in a permanent and systematic way listening devices” will violate fundamental rights and, in particular, the rights to privacy, confidentiality of correspondence, and freedom of expression;
Directive 2000/31 of 8 June 2000, related to certain legal aspects of information society services, and in particular electronic commerce in the internal market, states, in its Article15, that “. . .Member states shall not impose a general obligation on providers . . . to monitor the information which they transmit or store” ...
Article 15, which is part of Section 4 of the Directive related to “Liability of intermediary service providers,” aims to prevent a national judge from imposing liability for breach by the service provider of a general monitoring obligation due only to the presence on its networks of illegal material ... this provision that thus governs the issue of provider liability is, however, exclusively addressed to the judge of liability and has no impact on the present litigation since injunctive relief does not require any prior finding of negligence by the intermediary
Scarlet wrongfully considers that this injunction would result in its loss of the safe harbor from liability contained in Article 12 of Directive 2000/31 ... that benefits a provider of mere conduit or access to the internet conditioned upon it neither selecting nor modifying the information being transmitted;
That in accordance with “whereas” clause 45 of Directive 2000/31, “the limitations of the liability of intermediary service providers established in this Directive do not affect the possibility of injunctions of different kinds; such injunctions can in particular consist of orders by court . . . requiring the termination or prevention of any infringement, including the removal of illegal information or the disabling of access to it.”
That the only fact that the filtering technical instrument would not filter some infringing works belonging to the SABAM repertoire does not imply in any way that those works would have been selected by Scarlet; that indeed the fact that one does not succeed in blocking some content does not imply that this content has been selected by the intermediary as long as this intermediary does not target the information to be provided to his clients; the filtering measure is purely technical and automatic, the intermediary having no role in the filtering;
That, furthermore, even assuming that Scarlet would lose the benefit exemption of liability, it does not necessarily follow that it would be found liable; it would still have to be proven that it was negligent; that such litigation would nevertheless fall within the sole competence of a judge of liability;
filtering and blocking software applications do not as such process any personal information; that, like anti-virus or anti-spam software, they are simple technical instruments which today do not involve any activity implicating identification of internet user
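The court's description of filtering as a "simple technical instrument" that need not identify users can be illustrated with a toy sketch. Assuming, purely for illustration, that the filter matches content fingerprints against a repertoire supplied by the rights holder (real systems use acoustic fingerprinting rather than exact hashes, and whether they in fact avoid processing personal data is exactly what was disputed), the blocking decision itself looks only at the content:

```python
import hashlib

# Toy repertoire of known infringing files, represented only by content
# digests supplied by the rights holder. (A simplification: deployed
# systems use perceptual/acoustic fingerprints, not exact hashes.)
REPERTOIRE = {hashlib.sha256(b"copyrighted track").hexdigest()}

def should_block(payload: bytes) -> bool:
    """Decide purely on the content's fingerprint: the function never sees
    an IP address, account name or any other identifier of the user."""
    return hashlib.sha256(payload).hexdigest() in REPERTOIRE

print(should_block(b"copyrighted track"))  # True
print(should_block(b"home video"))         # False
```

Of course, as Scarlet argued, deploying such a check requires inspecting every user's traffic in the first place - which is where the privacy and Article 15 objections bite, whatever the matching function itself examines.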
Friday, February 08, 2008
Government databases - Why "the innocent have nothing to fear" simply isn't true
The Times has a very sad story:
A pensioner was killed after a couple used a policeman friend to trace him and then attacked his home in a dispute over a supermarket parking space, a jury was told yesterday.
Bernard Gilbert, 79, died of a heart attack after a brick was thrown through his window.
The former Rolls-Royce worker became a target when he shouted at Zoe Forbes, 26, because she parked her car in a space he had earmarked for himself at a branch of Asda, Nottingham Crown Court was told.
Mrs Forbes was upset and called her husband Mark, who told her to note down Mr Gilbert’s numberplate. He then asked a policeman friend to check Mr Gilbert’s address on the police national computer, using the car registration number.
Mr Forbes sent his wife a text message reading: “We’ll smash his car to bits and then his hire car and then whatever he gets after that until he dies.”
The couple deny manslaughter.

Samizdata puts it well: "The innocent have nothing to fear - so long as they have not annoyed anyone who knows a copper who can be persuaded to look up an address."
Wednesday, January 30, 2008
Leaked documents show UK government plans to "coerce" take up of "voluntary" ID Cards
Details have trickled out during the last week or so of the UK Government's plans to compel people to use what has been promised to be a "voluntary" ID card. These have been based on leaked government documents. The NO2ID campaign has now published a full version of the most important document, with its own annotations. This is available here (locally hosted copy). One of the most important passages is this:
Various forms of coercion, such as designation of the application process for identity documents issued by UK ministers (eg passports) are an option to stimulate applications in a manageable way. There are advantages to designation of documents associated with particular target groups, eg young people who may be applying for their first driving licence.

The Register has an insightful analysis:
"Various forms of coercion" could be used to accelerate the rollout of ID cards, the idea being that ID cards will remain 'voluntary' for as long as possible, while not having an ID card will become more and more uncomfortable. This, precisely what the government has intended to do all along, is stated baldly in an Identity & Passport Service leak cited by the Sunday People.
The IPS gives designation of a document under the ID Cards Act as an example of "coercion", and suggests driving licence applications as an area where this approach could be used. Effectively, this would mean that new applicants for licences would be forced to get an ID card...
'Coercion' could therefore be applied here via the delivery of a speedier service online with the aid of a digital passport or ID card, or (heavy coercion) by abandoning the post office end of the service for 'reasons of security.' Similarly, speed of processing can and has been used to illustrate how ID cards can 'help' people working with children and vulnerable groups get their CRB check processed faster. Next stop, compulsory ID cards for teachers? But as it won't be "universal compulsion", they're still not compulsory, right?
Tuesday, January 29, 2008
Data retention - "The innocent have nothing to fear" edition
The Economic Times of India has this worrying report:
MUMBAI: The wrongful arrest and the 50-day incarceration of an innocent software professional on charges he uploaded offensive pictures of Shivaji on Orkut were probably the result of a wrong internet timestamp and has raised concern over the over-dependence of police on Internet Protocol (IP) addresses as evidence in online crime, cyber experts said.
A couple of months ago, Lakshmana Kailash K was arrested, denied bail and given a taste of harsh prison life at Yerawada as the IP details given to police by his internet service provider, Bharti Airtel, matched his user identity. It later emerged that they had the wrong man. Police confirmed the faux pas and Mr Kailash was released. Now, the professional has sent legal notices to Bharti, police and government officials claiming damages for the agony he went through.
Sunil Phulari, the DCP with the cyber crime cell Pune said: "Nothing went wrong in the investigation. It was carried out according to the legal procedures. I cannot speak for Airtel."
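To see how a timestamp slip of this kind can point at the wrong person, here's a minimal Python sketch (the IP address, lease times and subscriber names are all hypothetical). ISPs typically log which subscriber held a dynamically assigned address during which interval; if the investigator reads the event's wall-clock time in the wrong timezone, the query lands in a different interval and returns a different subscriber.

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # India Standard Time

# Hypothetical ISP lease log recording, in UTC, who held the address when.
leases = [
    ("203.0.113.7", datetime(2007, 8, 31, 10, 0, tzinfo=timezone.utc),
                    datetime(2007, 8, 31, 14, 0, tzinfo=timezone.utc), "subscriber_A"),
    ("203.0.113.7", datetime(2007, 8, 31, 14, 0, tzinfo=timezone.utc),
                    datetime(2007, 8, 31, 18, 0, tzinfo=timezone.utc), "subscriber_B"),
]

def who_had(ip, at):
    """Return the subscriber holding `ip` at the timezone-aware datetime `at`."""
    for lease_ip, start, end, sub in leases:
        if lease_ip == ip and start <= at < end:
            return sub
    return None

# The upload was logged at 15:30 UTC...
event_utc = datetime(2007, 8, 31, 15, 30, tzinfo=timezone.utc)
# ...but an investigator who misreads that wall-clock figure as IST ends up
# querying a moment five and a half hours earlier - and a different lease.
misread = datetime(2007, 8, 31, 15, 30, tzinfo=IST).astimezone(timezone.utc)

print(who_had("203.0.113.7", event_utc))  # subscriber_B
print(who_had("203.0.113.7", misread))    # subscriber_A
```

Fifty days in prison can turn on exactly this sort of unchecked conversion - which is why IP evidence should be corroborated rather than treated as conclusive.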