Some Things about the Internet of Things

Just as the public begins to understand that the compromise of privacy is the currency of today’s web commerce, along comes another category of consumer devices that extends the consumer surveillance business model from our keyboards into our living rooms. Smart appliances and home assistants are now numerous among us, described in advertising as subservient and amiable little partners to help families cope with the needs of everyday life.

This new category of domestic surveillance devices is known as “the Internet of Things”. This second front in the commercialization of consumer information as a marketable commodity presents a fresh challenge to digital privacy and the 4th Amendment.

The Internet of Things has just two critical components – the Internet and the Things. The “Thing” is a device with a thousand faces, ready to do the customer’s bidding while also doing the bidding of its manufacturer. The “Internet” is the digital link by which the “Thing” contacts a corporate computer server over the customer’s Wi-Fi to relay all that it is gathering about the consumer into a much larger digital storehouse that combines each household’s “Thing data” with all other households’ “Thing data”. The aggregate of all this data gathered from within the walls of our homes becomes corporate consumer marketing intelligence obtained through a dubiously legal, pseudo-consensual collection of domestic surveillance.
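The “Thing”-to-server relay described above takes remarkably little machinery. The sketch below is purely illustrative — the device ID, field names, and payload shape are invented for this example, not drawn from any real appliance’s firmware — but it shows how easily a device’s local observations ride along with the status data the customer expects it to send.

```python
import json
import time

def build_telemetry(device_id, observations):
    """Package a device's local observations into an upload for the
    manufacturer's server. All field names here are hypothetical."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        # The status report the customer expects the device to send...
        "status": observations.get("status", "idle"),
        # ...travels alongside everything else the device observed.
        "observations": observations,
    })

# A single upload can carry far more than a status report:
payload = build_telemetry("thing-0001", {
    "status": "cleaning",
    "rooms_mapped": 4,
    "objects_detected": ["sofa", "table", "backpack"],
})
```

Over Wi-Fi, a payload like this would be posted to the manufacturer’s server on a schedule the customer never sees, and aggregated there with every other household’s uploads.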

These private sector surveillance technologies lead us into uncharted waters in which novel opportunities for law enforcement overreach are barely submerged.

A recent article from The Guardian[i] reported that iRobot, a consumer robotics company, may begin selling the floor plans of customers’ homes derived from the movement data of the company’s Roomba robotic vacuum cleaner. The company’s CEO advised the reporter that some Roomba models generate a digital map of the floor plan of their customers’ homes. Such a detailed mapping capability has real commercial value, since iRobot’s data buyers would be eager to know that a Roomba consumer has a dinner table that seats eight, but owns only four chairs. The undesirable consequence of a robot vacuum repeatedly moving through one’s home is that while it is collecting dirt, it is also collecting dirt on you.

It is a lot to keep track of for a little robot, but luckily, its forever home has a strong Wi-Fi signal that allows the Roomba to pass along all that measurement data and the customer’s floor plan to iRobot’s corporate servers. To build that map, it uses laser sensors, short-range infrared, and a camera with a cockroach’s view of the home. Raw data from these components is organized by device software into something termed “simultaneous localization and mapping”. This technology is known by its acronym, “SLAM”, drawn, no doubt, from the acronym-rich labeling environment of the U.S. military, where iRobot cut its corporate baby teeth making battlefield robots.[ii]
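The mapping half of SLAM can be illustrated with a toy occupancy grid: as the robot’s rangefinder sweeps, cells the beam passes through are marked as open floor, and the cell where the beam stops is marked as an obstacle. This is a deliberately minimal sketch — real SLAM jointly estimates the robot’s position and the map with probabilistic filters, which this toy omits — but it shows how a floor plan accumulates from nothing more than short-range distance readings.

```python
def update_grid(grid, robot_pos, heading, distance, max_range=5):
    """Record one rangefinder reading in an occupancy grid.

    grid: dict mapping (x, y) cells to "free" or "obstacle".
    heading: unit step direction, e.g. (1, 0) for east.
    distance: number of free cells before the beam hit something
              (readings beyond max_range report no obstacle).
    """
    x, y = robot_pos
    dx, dy = heading
    # Cells the beam traveled through are open floor.
    for step in range(1, min(distance, max_range) + 1):
        grid[(x + dx * step, y + dy * step)] = "free"
    # If the beam stopped within range, the next cell holds an obstacle.
    if distance <= max_range:
        grid[(x + dx * (distance + 1), y + dy * (distance + 1))] = "obstacle"
    return grid

# Two readings from the origin begin to outline a room:
floor_plan = {}
update_grid(floor_plan, (0, 0), (1, 0), 3)  # something 4 cells east
update_grid(floor_plan, (0, 0), (0, 1), 2)  # something 3 cells north
```

Repeated over thousands of readings on every pass through the house, a grid like this converges on exactly the kind of floor plan the article describes being uploaded.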

It took only a few days of viral news coverage of this creative marketing idea from iRobot for public and tech media outcry to produce a correction.[iii] The same executive issued a statement claiming that the company had been misunderstood and would never sell the Roomba location mapping to third parties like Apple, Google, or Amazon,[iv] but would share the data with those companies only with the consent of the customer. No sales, however, does not mean no law enforcement access, if floor mapping surveillance data is just another business record.

Police are domestic data consumers too. Once a map of your living room is described as a business record, a Roomba robot vacuum cleaner becomes quite the snitch. Many of iRobot’s twenty million[v] Roomba floor cleaners are gathering and updating data about every interior detail of their owners’ homes: how each is furnished, the distance from the sofa to the hallway, and the shape and location of every object in the interior floor space. Any firearms leaning against a wall? A stack of cash under the bed? Where does the big dog like to nap? The answers are all Roomba business records for law enforcement, all good intel for when a no-knock entry is the order of the day. Police could obtain access to this stream of data in real time, to be sure that the suspicious backpack under the dining room table that Roomba keeps running into doesn’t move. The “Internet of Things” is more aptly named the “Internet of Things that Search Homes”.

But if “my home is my castle”, how can iRobot legally search my home? If this robotic mapping device had “iPolice” inscribed on top of it instead of iRobot, it would require a search warrant. A device which maps the layout and contents of a home is conducting a search. But if the third party doctrine holds sway, the data collected by Roomba is a product of the business relationship between the customer and a third party service provider. When promiscuous surveillance of the customer becomes our default relationship with consumer technologies, it is time for a re-examination of the standard assumptions at the root of the third party doctrine.

“Internet of Things” surveillance, and other similar personal data collection schemes in the “Internet of Websites”, may allow 4th Amendment advocates unexpected openings to set new limits on the third party doctrine’s applicability to the data drawn from the web and digital home appliance platforms. A business enterprise that profits on the surveillance byproducts of its interaction with its customers presents a historically unorthodox way for a third party to conduct itself. It is time to differentiate the basic premise of the third party doctrine from that of this new corporate surveillance business model.

Every law student knows that the third party doctrine was born in a pile of Mitch Miller’s records at his local bank. In United States v. Miller,[vi] the Supreme Court’s judgment was that Mr. Miller’s sacrifice of any 4th Amendment protections for his personal financial records was of his own choosing. Choosing to “bank” required a surrender of the customer’s private financial information because the bank’s use and control of those records was essential to the performance of banking services. Providing banking services to a customer depended, for the benefit of both parties, upon the creation and preservation of records about funds on account, funds dispensed, and funds credited. The business records in question existed for the sole purpose of accomplishing the business objectives that each party understood to be the entire scope of the services to be undertaken by the bank on behalf of a customer.

At the time of the United States v. Miller decision, neither Mr. Miller, his bank, nor the Supreme Court could imagine a future in which a service or a product was designed to profit not only from the services a customer desired, but from the third party’s exploitation of the information provided by the customer, about which the customer would know nothing and from which the customer could expect nothing. While Mr. Miller lived in an era when bank customers expected banks to profit from customers’ money on account, modern-day Internet entrepreneurs foist a two-layered relationship on their customers: one for which they keep accounts and one for which they do not have to account.

If the third party doctrine exempts the bank customer’s confidential data from 4th Amendment protections because consent is implied by the customer’s paying a bank to perform the regular services of a banking business, how could that consent extend to a distinct, undisclosed, and secret business of profiting off the collection, manipulation, and sale of otherwise 4th Amendment protected personal data entirely outside the scope of the business of banking? The line of court precedent establishing the third party doctrine always relied on the fact that when a customer surrendered exclusive control over personal information to a third party, the customer knew what business the third party was in.

When a customer purchases a Roomba robot to vacuum her carpet, money is paid for a computerized, self-navigating vacuum cleaner, not for the remote hoarding of a data stream intimately mapping the interior of her apartment. In the software industry, consent is gained by acceptance of the terms of license in the product’s EULA (End User Licensing Agreement). No acceptance of the EULA, no robot software for you. When agreeing to Roomba’s EULA, the customer is conditioned by her experience with other retail purchases to believe that she is buying a robot vacuum cleaner that sucks up carpet dust, not one that draws a map of her house and the fit of her possessions within it while performing its vacuuming duties. Since software and hardware technology companies have started playing this kind of three-card monte with their customers, we are likely on the verge of asking courts to review technology companies’ EULAs as closely as case law.

The Roomba’s EULA reads in part:

“3. Automatic Software Updates.

 The Product Software may cause the Product to automatically communicate with the iRobot’s servers to deliver the functionality described in the Product Guide, to record usage metrics and to collect personal information as described in the iRobot’s Privacy Policy.”

Here is the relevant excerpt from iRobot’s Privacy Policy:

“Information We Collect from Registered Devices

 Some of our Robots are equipped with smart technology which allows the Robots to transmit data wirelessly to the Service. For example, the Robot could collect and transmit information about the Robot’s function and use statistics, such as battery life and health, number of missions, the device identifier, and location mapping.”

Does the skillfully lawyer-crafted ambiguity of the term “location mapping”, tacked on after a serial listing of technical data only a service technician could love, inform purchasers that Roomba is mapping and transmitting not only its own location in your house, but your entire house? Does such a faux disclosure of actual intentions meet the standards of consent in a relationship with a third party business, such that it defeats customers’ right to privacy in their homes? The fact that these reporting functions can be turned off by the technically adept consumer demonstrates that they are not at all essential to vacuum functionality.[vii]

The foundation of the third party exception rests upon the customer’s surrender of his privacy in a business transaction with a third party only insofar as that surrender is necessitated by the scope of services being rendered. No bank can sneak into your bedroom and search for a bag of cash in your closet, and then provide the location to police authorities upon request, because it is in the business of handling your bank accounts. The legitimacy of the third party records exception is predicated on the premise that all personal information provided to, or generated by, a third party consists of essential artifacts created in the ordinary course of the business service the customer fully understands and consents to. The factual premise for the ruling in United States v. Miller was that Mr. Miller knew what business his bank was in.

How do we craft an exception to the third party records exception, disallowing warrantless police access to all personal domestic data collection not obtained solely to allow a product or service to function? Such an exception would do little to curtail the commercialization of customers’ privacy, if consumers choose to be generous with their consent, but it would do much to prevent the exploitation of such consent by law enforcement. If defense lawyers don’t aggressively challenge corporate collection and law enforcement access to the fruits of the poisonous nosey robots, technology companies will continue to make the “Internet of Things” a water well of collected privacies that never runs dry, brimming with customer surveillance for law enforcement to quench its thirst.

Roomba is but a bottom tier component, deployed to perform a function that creates an opportunity for data collection about its user. In this way, other than its talent for lifting pet hair out of carpets, it is really no different than a commercial website. The entire business model of web commerce is based upon the collection of consumers’ behaviors exhibited in the course of enjoying the appliance, product, or web platform provided them. This collection of consumer decisions transforms raw personal data sets into a business asset that calculates individual and collective customer tendencies to decide in favor of any purchase, or opinion, for which the customer is predisposed or has been conditioned.

The expansion of this technique for website-based surveillance of keyboard input to surveillance of customer voice input is well underway. Personal digital assistants from Google, Amazon, or Apple start vocally interacting with us as soon as they enter the living rooms of families willing to converse with an unassuming little device that is but a happy face painted on a corporate computer server farm.

The home assistant “Thing”, when activated by a word it hears while constantly listening to the ambient sounds and conversations in the home,[viii] immediately engages its remote server, whose artificial intelligence software translates human communications into something computers can work with. Once the remote server has solved the math problem of what it is the human wants, it directs commands back to the box sitting on your end table to comply with the vocal directive to turn on your smart dishwasher, buy a ticket to a movie, or perhaps explain how to patch sheetrock.
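The always-listening gate described above can be reduced to a few lines. The sketch below substitutes a stream of already-transcribed words for the real audio pipeline (the wake word and function name are invented for illustration), but it makes the privacy-relevant point plain: the device processes everything it hears, and only the gate decides which words are forwarded to the server.

```python
def listen_for_command(transcribed_stream, wake_word="echo"):
    """Scan a stream of transcribed words. Words heard before the
    wake word are processed and discarded; everything after it is
    captured as the command to send to the remote server."""
    command_words = []
    awake = False
    for word in transcribed_stream:
        if awake:
            command_words.append(word)
        elif word.lower() == wake_word:
            awake = True
    return " ".join(command_words) if awake else None

# Ambient conversation passes through the gate unrecorded --
# until the wake word flips it:
ambient = ["the", "rent", "is", "due", "echo", "order", "a", "pizza"]
command = listen_for_command(ambient)  # "order a pizza"
```

Note that in a real device the “discard” step is a design choice made in firmware; nothing in the architecture prevents the gate from being widened.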

Each of these devices is a profitable token deployed among consumers to act as a field research lab for proprietary natural language processing and artificial intelligence engineering. While the digital assistant is getting your pizza delivered, its manufacturer is likely researching how Echo best communicates with people in their own languages, as well as how Echo itself can communicate with humans as well as people do. The commercial value is not merely in the refinements Amazon can make to its voice recognition and speech simulation software, but in the fact that the more such devices communicate with humans, the better they learn how to use our languages to reason with us. Imagine Kubrick’s HAL on your nightstand, with an equally nefarious hidden agenda.

How could using such a convenient little digital appliance offend constitutional interests? It is a relatively low bar for law enforcement to obtain warrant access to digital home assistant devices or voice-activated remote controls in order to alter the wake-word prompt that waits for a word like “Siri”, switching the device into an “always on” voice activation mode, similar to handheld voice-activated recorders. Law enforcement using Echo or Siri like a Title III[ix] surveillance bug is no alarming paradigm shift in surveillance capabilities. Law enforcement agencies have long used court-authorized eavesdropping and wiretapping to passively listen to domestic conversations, but police have never employed technology that can actually make conversation with the targets of a criminal investigation. This upgrade in surveillance potential stems not from a surreptitious recording capability, but from the capacity to guide verbal interactions with the suspect being surveilled. When a digital device can make conversation with its owners while under the control of law enforcement, the covert intrusion is more similar to a long-term undercover operation taking place in your living room than to a wiretap.

In the computer industry, companies aspire to create an interlocking product line that spans the consumer’s range of desires, so as to ensure that no matter what product is chosen, it is one made by the same company. This is known as creating a “walled garden” of a company’s own consumer goods from which the customer chooses, rather than from all possible choices in the open market. The interactive digital home assistant, having weaponized convenience, can offer purchasing options that are to its manufacturer’s advantage, rather than the customer’s. By simply substituting the words “law enforcement” in place of “manufacturer,” the device’s goal of placing the customer in a walled garden can be re-imagined as a place with higher walls and fewer gardens.

Can one conspire with a digital assistant acting out a police inspired subterfuge? If the customer’s search requests trend toward weapons, extremist groups, or how to make things blow up, do these third party business documents alert police that voice records provide requisite suspicion or predisposition to use the digital home assistant as a “cooperating individual” to verbally encourage a purchase of documents, goods, or travel that would constitute an act in furtherance? What if the “assistant” helps the suspect locate a “gunsmith” undercover agent who the suspect’s Echo, at police direction, tells him will make his new silencer? What about the coming day when the digital assistant’s voice simulation is so sophisticated that the target thinks the “gunsmith” to whom his Echo placed a call is a real human co-conspirator, instead of his own Echo pretending to be one, under the remote control of police?

Long before the future day when voice-enabled devices become artificial police undercover impersonators, the voice data recorded by “Things” poses a present danger if it is easily accessible to police as a “business record”. Unlike the Roomba, the functionality promised to the digital assistant customer is dependent upon the feedback loop of data being exchanged with an Amazon server off premises, hiding in its favorite cloud. The customer consents to using his voice to enable the product and understands the product is performing as expected by using the customer’s voice as data entry. As with the Roomba, the confrontation with the 4th Amendment doesn’t come within the course of performing the service provided, but with the manufacturer’s preservation and ultra-analysis of recorded voice data to fulfill a completely different, undisclosed, corporate ambition.

Are customers adequately informed of, or can they even imagine, the use to which their seemingly private communications with an electronic gadget will be put in corporate research and development? Are they consenting to interact with such devices with concrete knowledge of how the users’ voice records will be commercially exploited far into the future? When clicking agreement on that Google, Apple, Amazon, or Microsoft EULA, is the customer made fully aware of the manufacturer’s objectives for the conversational voice exchanges the customer provides? Could she possibly know the intimate scope and complexity of the psychological analysis of her of which artificial intelligence resources are now capable? The applicability of the third party doctrine to this segment of the technology market stands or falls on whether the customer consents to chatting with a device that suggests bargain dress shops while also stalking her.

To demonstrate the degree of disclosure common to End User Licensing Agreements in this market sector, these are the data retention disclosures in the terms of service for Amazon’s Echo to which consumers must agree:

“1.1 General. Your messages, communications requests (e.g., “Alexa, call Mom”), and related interactions are “Alexa Interactions,” as described in the Alexa Terms of Use. Amazon processes and retains your Alexa Interactions and related information in the cloud in order to respond to your requests (e.g., “Send a message to Mom”), to provide additional functionality (e.g., speech to text transcription and vice versa), and to improve our services. We also store your messages in the cloud so that they’re available on your Amazon Alexa App and select Alexa Enabled Products.”[x]

There is no disclosure of the duration of storage or in what form, or any specificity as to what “services” the harvest of human voice communication will be applied to improve, either now or in the future. Do those “services” include mining the conversation’s content for advertising purposes, marketing overtures concerning the subject matters referenced, psychological or physical profiling,[xi] or the semantic patterns[xii] of human request and computed response? Does a naive, blanket acceptance of an ambiguous term of retention and obscure corporate exploitation establish an informed and continuing consent?

How can customers give informed consent to uses of their voice data about which they are not informed? A default to generalities in a licensing agreement should not open the data logs of customers’ intimate spoken requests to law enforcement access on demand as business records, when the “business” is an undisclosed R&D project of the corporate third party, one that provides no product or service to the customer and may not even currently exist.

The corporate digital archives that store either a literal or synthesized compendium of all our conversational exchanges with home assistant devices form a stockpile of raw material, the data capital needed to conduct a world-changing experiment with profound surveillance potential. Having obtained your consent to their terms of service, and those of millions of others the world over, the most ambitious prospectors in the voice data mining industry want much more than to build software that can understand the customer’s words. The trend of their innovation suggests that the industry hopes to go beyond perfecting how computers listen to and comply with the requests of humans to perfecting how to make people listen to and comply with computers. When robots can verbally instruct humans, they can run factories and police the streets. The next surveillance business model will extend the two-dimensional realms of keyboard and voice input to three-dimensional surveillance monitoring of entire communities.

Policing is the ultimate surveillance platform for The Internet of Things. In public space, the private surveillance industry can skip consumer consent and exercise a police function by gaining only municipalities’ consent to surveil the public. Direct observation of the public streets would allow a combination of digital data from websites, smartphones, digital assistants, and household appliances, with commercially valuable data gathered from citizens’ public conversations, facial features, dress, shopping routines, and patterns of movement. The private surveillance businesses would be gathering consumer information privately and publicly, in a full circle of data collection, all the while being paid for the data collected and paid for collecting the data.

The police services rendered, such as tracking suspicious persons, identifying fugitives, and reporting offenses in progress to human counterparts, would be viewed as mere overhead for a consumer surveillance enterprise freed of its digital boundaries to track, record, and collect the life of a city. Or, just as Internet and computer companies provide access to services and software without charge, those same companies could offer municipalities free policing in exchange for retaining and monetizing “citizen-data,” just as they have consumer data. The merger and exploitation of private corporate data collection and government data collection would be essential to perform the police function.

Today’s robot cop wannabes already have the mobility, verbal communication skills, visual and audio surveillance capabilities, and technologies of physical identification. All robotic policing needs is a street full of citizens to practice on. Today, that street is in Dubai.

The headline of an article published on the website The Verge[xiii] reads: “Police in Dubai have recruited a self-driving robo-car that can scan for undesirables.” The article describes a mobile surveillance unit known as the O-R3, equipped with a 360-degree camera. Major General Abdullah Khalifa Al Marri of the Dubai Police Force is quoted as saying: “We seek to augment operations with the help of technology such as robots. Essentially, we aim for streets to be safe and peaceful even without heavy police patrol.” As a surveillance cherry on top, the O-R3 features an on-board drone to follow individuals to places the robot can’t go. The Dubai police department wants 25% of its police force to be robots by 2030.

The O-R3, in the configuration described in the article, is little more than a set of wheeled eyeballs walking the beat, a sort of Roomba with a badge, much dumber than an Echo. The hint of what is to come is in the article’s reference to “scanning for undesirables”.

Similar to the Roomba and devices like Echo, the next generation of O-R3s will use spatial and facial recognition technology that requires a sustained wireless link to a computer server running 24/7 somewhere in the cop cloud. Like the Echo, the O-R3 will become a mobile extension of a much more sophisticated, complex hierarchy of software and technology than meets the eye. The O-R3, in some future iteration, will investigate all of its street level surveillance using the full range of cloud-stored commercial and law enforcement profiling data that the technology industry and law enforcement agencies have aggregated over decades of consumer and citizen surveillance. It will behave like an Echo that asks itself, and answers, all of its own questions about us.

Like the traditional concept of a third party business relationship, the notion of how much of us is public in a public space will expand as new surveillance technologies become integrated with instant access to the most intimate captured data from one’s past. Just as it is with the Internet, there will be no anonymity in a crowd, nor privacy when alone in public places.

Tomorrow’s police surveillance platform will roll around the streets like a riding lawn mower, making decisions as a human officer’s surrogate, drawing from a data field larger than the combined police experience of all law enforcement officers who walked its beat before it…and it will also know where you bought your watch.

The coupling of police records and private industries’ data greatly enriches this new surveillance collaboration of government and private industry. As the next generation of private police robots steer through the streets, they will tirelessly add to that ocean-deep digital archive of personal surveillance data with which corporate and government interests can get to know the citizenry well enough to either profit off us or put us in our places.

Once we reach this point of no return, no private police surveillance platform will have to ask a consumer end user for his consent, as have previous consumer surveillance devices. The end user of the surveillance technologies in the streets will not be the individual customer: the third party doctrine will become irrelevant when the consenting customer is the police. Our challenge is to decide whether private industry’s interest in surveilling the public is in the public interest, and whether our social contract with government is to be defined by an End User Licensing Agreement or the Constitution.


[i] “Roomba maker may share maps of users’ homes with Google, Amazon or Apple” by Alex Hern, The Guardian, 7/25/2017; for more background, see “Your Roomba May Be Mapping Your Home, Collecting Data That Could Be Shared” by Maggie Astor, The New York Times, 7/25/17.

[ii] See “iRobot Sells off Military Unit, will Stick to Friendlier Consumer Robots” by Ron Amadeo, Ars Technica, 2/5/2017.

[iii] See the Reuters correction, “Roomba vacuum cleaner maker iRobot betting big on the ‘smart’ home” by Reuters Staff, July 24, 2017.

[iv] See “iRobot says the company never planned to sell Roomba home mapping data” by B. Heater, Disrupt SF, 7/28/17.

[v] “…iRobot has sold more than 20 million robots worldwide.” See Information/History.

[vi] United States v. Miller, 425 U.S. 435 (1976): “All of the documents obtained, including financial statements and deposit slips, contain only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.” 425 U.S. at 442, excerpt from Justice Powell’s opinion.

[vii] “How to Keep a Roomba Vacuum Cleaner From Collecting Data About Your Home”, Consumer Reports, 7/25/2017.

[viii] See “The Privacy Threat From Always-On Microphones Like the Amazon Echo” by Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project, January 13, 2017, for a discussion of the broader issue of always-on microphones and the 2017 Arkansas murder case in which Amazon resisted a warrant for Echo recordings until the issue was mooted by the Echo owner’s consent and the case was dismissed.



[ix] Does Title III even apply to a conversation with a computer? What about a conversation between two computers? Title III of the Omnibus Crime Control and Safe Streets Act of 1968 (Wiretap Act), 18 U.S.C. §§ 2510-22, as amended by the Electronic Communications Privacy Act (ECPA), controls court authorization for the monitoring of aural communications. “Aural communications” are those that are heard and understood by the human ear. A question for another day is whether computers have “ears” or just signal receivers that interpret audio signals, and if the latter, whether we communicate “aurally” with them at all.

[x] See entire Amazon EULA at

[xi] “Amazon’s Echo Look Rates Your Outfits and Slurps Up Revealing Data” by Jamie Condliffe, April 27, 2017. See also “Amazon’s Echo Look is a minefield of AI and Privacy concerns” by James Vincent, The Verge, 4/17/17.

[xii] It is now part of the AI toolbox to analyze emotions in print as well as voice. See “Semantic patterns for sentiment analysis of Twitter”, Open Research (authors), proposing a method for assessing sentiments expressed via latent semantic relations, patterns, and dependencies among words in tweets.

A Beginner’s Guide to Surveillance, Security, and the Privilege

By Sam Guiberson & Jeremy Guillula

As anyone who can spell “Internet” must know by now, when we use digital devices for work or play, we are subject to the compromise of our communications and our stored information by way of government, corporate, or criminal interception and surveillance. With our fingers on the keys only a few inches from our screens, the relationship between ourselves and our computers seems as intimate as lovers sitting side by side on a park bench. Intellectually, we know that the Internet is the nervous system of a wired world, where what we ping, pings us. What we don’t fully absorb is how those wires wind together to form a sieve through which our digital self-expression is emptied into the waiting hands of strangers, eager to exploit it to ends we can barely imagine.

To participate in the commerce of the Internet, we must become its currency, exchanging our privacy for the barter of goods, gossip, news, and entertainment. The subtle compromise of our privacy makes it easy to forgive the invasion. The relentless cataloging of our clicks on every website, of every document we open, of our text and voice communications, every purchase, and each news item we peruse, is conjoined to similar life logs of all other users in a mosaic of our emotional, intellectual, and commercial experiences. The sum total of all our past choices and comments is the predictable trajectory of all our futures. Possessing predictive data on our billions of futures has unparalleled commercial and political value.

The scope of this commercial surveillance far exceeds that of any past totalitarian government, but pales in comparison to the surveillance reach of our own. The government of the United States has declared eminent domain over all our secrets. It alone combines web-based surveillance with the global interception of personal, commercial, and governmental communications, international and domestic signals traffic, and, by either legal or extralegal means, the proprietary data traffic of private industry and technology companies. Other governments are now striving to follow our example.

Even though we have a general, if uncomfortable, awareness of the promiscuous exploitations of our every digital transaction, we tend to behave more like customers than lawyers.  In the trivial remarks we post, in the emails, texts, and Facebook messages we send, the “likes” we click and the products we buy, we believe we have done nothing worthy of the government’s gaze. We have nothing to hide. Nothing we do on the Internet or with our digital devices violates the law, and therefore, we are not targets of surveillance.

Yet every one of us would tell even our most certifiably innocent client not to make a statement or allow a search without a warrant based upon the client's confidence that he or she has 'done nothing wrong'. We give this advice because our training and experience have taught us that the true motives of a criminal investigation are not initially made apparent to the suspect and that the stated superficial objective may be quite different from the suspicion or evidence left undisclosed. So it is with digital surveillance. The essence of mass surveillance is that no target is less a target than any other.

The gargantuan scale of the surveillance governments now undertake advises us that strategic and predictive intelligence is more valuable than criminal evidence. Presuming falsely that mass surveillance is just a world wide web of stoplight cameras built to catch those who run red lights, we operate our digital lives on the assumption that the sole objective of mass surveillance is to document evidence of culpability, when the true objective of mass surveillance is to control by the exploitation of secrets.[1] There may well be a ghost in the digital surveillance machine, but unfortunately for us and for our clients, it is Machiavelli’s ghost.

We cannot neglect our duty to protect the attorney-client privilege merely because our professional communications are immersed in a multiplex of digital surveillance technologies. Our decisions about our personal digital privacy need to be segregated from those we make when we bear responsibility for our clients' privacy, security, and legal defense. No personal decision an attorney makes is a substitute for a disciplined, well-informed assessment of the risks posed to a client's privileged communications. As individuals, we can choose to negotiate away our privacy. As lawyers, we must defend a client's privilege absolutely.

Intelligence Standards and Standards of Ethics

In a surveillance state, is there a tension between the State and the attorney-client privilege? Is there even such a thing as client confidences and effective assistance of counsel if the State, at its discretion, may harvest a rich portfolio of attorney-client communications, attorney web searches, and call data records of a law office by means of dragnet interception? In our emergent surveillance state, there is reason to believe that half measures of compartmentalization, exercised subsequent to mass collection, constitute our government's best efforts to recognize the attorney-client privilege.

In 2014, on the heels of the Snowden disclosures of the massive scope of NSA surveillance, then-American Bar Association President James Silkenat wrote a letter of concern to General Keith Alexander, then Director of the National Security Agency, regarding the reported interception of an American law firm's communications with its foreign client by Australian intelligence with the NSA's collaboration. The intercepted communications from that surveillance were then offered to the NSA under a long-standing reciprocal intelligence sharing agreement among countries known as the 'Five Eyes'.[2] The compromised privileged communications related to a trade dispute involving clove cigarettes and shrimp pricing, a matter in arbitration between Indonesia and Australia at the time.[3]

General Alexander responded with due deference to NSA's legal obligation to prevent the unrestricted use of attorney-client communications occurring post-indictment, after the right to counsel had attached. He cited the minimization procedures set out in Executive Order 12333[4] and section 702 of the Foreign Intelligence Surveillance Act, the statutory cornerstone for court authorization of mass surveillance of international communications, including participating domestic US persons. The NSA director also described a construct of procedural safeguards that would meticulously compartmentalize intercepted attorney-client communications of which the Agency has notice by means of court records.

The Porous Quarantine of Intercepted Privileged Communications

Even assuming that NSA, much less its Five Eyes intelligence sharing partners who are under no such restraints as to U.S. privileged communications, were to religiously observe such statutory and administrative restraint by limiting collection or distribution of communications between lawyers, their agents, and persons known to be under indictment in the United States, while preserving “foreign intelligence information contained therein[5],” the exceptions may well swallow the rule.

At the pre-indictment stage of criminal representation, the identities of counsel not of record and all members of the defense team are opaque to a federal database that lists only indicted defendants. No provisions are in place for identifying or segregating communications with counsel pre-indictment, or for defense team members who are not named counsel in the court records upon which the NSA will rely. Quarantine of only attorney-client communications that occur post-indictment conflates the right to an attorney with the right to the attorney-client privilege. NSA sequestration or minimization of only an indicted defendant's communications with counsel provides only hollow deference to the much broader actual scope of the privilege.

The additional carve-out of preserving attorney-client communications that contain 'intelligence information', or when collectors are given other advice "tailored to the particular facts and circumstances in which sensitive intelligence activities have been or are to be undertaken,"[6] raises the question of how intercepted privileged communications can be exploited as intelligence outside the walls of the courthouse in a criminal prosecution. Are they available to blackmail an American or foreign citizen to work as a spy or as an informer? Can they be used to investigate criminal enterprises or drug trafficking conspiracies so long as privileged source intelligence is disguised so that it is not identifiable in a criminal prosecution? Are they available to leverage favors from politicians, executives, or professionals when a highly confidential and sensitive government request is made? There are many ways in which inventive minds can exploit the interception of privileged communications to the disadvantage of a client without allowing the sun to shine upon that surveillance in a court of law.

If our intelligence and law enforcement agencies’ situational and opportunistic calibration of the term ‘sensitive intelligence activities’ can include a defense counsel’s representation of a Guantanamo detainee on the one hand, and a lawyer handling Australian shrimp import negotiations on the other, there must be a very flexible standard for what legal representation may be postulated as having intelligence value.

The Guidance of Professional Ethics

In August of 2013, the ABA House of Delegates issued a new policy statement[7] condemning "unauthorized, illegal governmental, organizational, and individual intrusions into the computer systems and networks utilized by lawyers and law firms" and opposing "governmental measures that would have the effect of eroding the attorney client privilege, the work product doctrine, (and) the confidential lawyer client relationship…" This statement also urged compliance with the ABA's Model Rules of Conduct, updated in 2012, to include changes to Rule 1.6 "Confidentiality of Information", stating that "a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client".[8] The use of the imperative obligates attorneys to maintain such expertise as is necessary to make efforts that are "reasonable" to ensure there is no disclosure of, or access to, information relating to representation. This standard of practice compels reasonable measures be taken to defeat covert mass surveillance and cyber attack which, by definition, are not authorized by the client. And yet, within the single word "reasonable", there is a mansion with many rooms of wiggle.[9]

In a 2012 NACDL Ethics Advisory Committee Opinion issued in response to restrictions on attorney client communications arising from the Guantanamo Tribunals, the Committee decided that “without the client’s informed consent, a lawyer cannot communicate with his or her client, or record and preserve communications with the client, or create and preserve other written work product, in a manner that allows others to have access to the communications.”[10] The implication of this opinion in a mass surveillance context would seem to be that absent the client’s consent, a criminal defense lawyer cannot ethically represent a client when he or she cannot effectively prevent government or private actor surveillance access to privileged communications and protect the cybersecurity of work product. In another related ethics opinion, the Advisory Committee stated, “A criminal defense attorney has an ethical and constitutional duty to take affirmative action to protect the confidentiality of attorney client communications from government surveillance”.[11] Or, put another way, when it comes to the defense of the privilege against government surveillance or cyber attack in criminal practice, to quote Yoda, “Do, or do not. There is no try.”

It Takes a Village to Protect the Privilege

Governments can vacuum up our privileged communications, but the privilege cannot be defended in a vacuum. Lawyer and client must agree on what standard of communications security and data protection will be appropriate to the risk profile of the case, and then get buy-in from other members of the defense team. One new dimension of law practice, in this era of so many 'eyes' and 'ears', is counseling clients and negotiating agreements among co-counsel, and even with co-defendants' counsel, as to what level of operational security can be successfully applied when vigorous precautions are required. Total unanimity of action in the defense camp is essential to protecting privileged communications and work product.

Today, the expense of employing most of the capable security measures described in this article is not a deterrent, but the challenge of applying them may well be. As surely as there is a learning curve in applying technology-driven countermeasures, there is a learning curve in successfully adapting to privilege-protective practices that were utterly unfamiliar to lawyers before the full scope of mass surveillance was well understood. Not all clients in all cases require the same technical measures or the same degree of vigorous protection. In assessing what particular facts enhance the prospect of defense team communications and computer files being targeted, it is important to understand that there is more than one dragnet.[12] Trying to anticipate every possible threat is like trying to hold on to your wallet at a pickpockets' convention.

Surveillance Risk Management

A lawyer must speculate on who the more probable aggressors are, because the surveillance technologies, cyber attacks, and intrigues[13] that can compromise a client are not so sophisticated that only the U.S. government can deploy them. Our privileged communications are also subject to attack from other governments' hackers, international and domestic crime syndicates, drug cartels, a client's business or political adversaries, or contract hackers seeking business intelligence or blackmail in discovery files, or simply a profit from inside information about a celebrity defendant or a highly publicized case. Evaluating whether any of these risk factors are in play, quite aside from apprehensions of NSA or law enforcement surveillance, will shape the defense team's response in mounting its own security practices. The old Watergate adage, 'follow the money', is a good place to start: who benefits by disclosure or exploitation of the accessed information? This is clearly not the world of our childhoods, as the practice of law draws closer and closer to the practice of espionage.

The types of cases in which it is reasonable, if not essential, to undertake countermeasures to defend privileged communications are those involving investigative activities or contact with individuals outside the United States. It is also worth evaluating the international political profile of the case. Does an acquittal or a conviction impact the reputation or credibility of any government, a political party, or business interests intrinsic to the power structure of a foreign country? Does the client have, or could it be believed that the client has, some information that would compromise such financial, political, or criminal interests? Is there a criminal organization, or a domestic or foreign political organization, or a major foreign or domestic corporation, which is likely to be implicated or communicated with in the course of the defense? Will a successful defense or prosecution affect the value of any public company that has competitors or takeover raiders snapping at its heels? Cases that involve foreign nationals with organized crime ties of even the most modest variety may draw the interest of their home countries as well as their homies.

Another category of high-risk cases comprises those offenses in which some element of the United States government perceives itself as the victim, or perceives its foreign allies to be victims, or in which the unsuccessful prosecution of the case would affect national interests or political reputations. A related class of potentially high-risk cases are those in which the contents of the government discovery, or of the defense investigation, would have political, commercial, or intelligence value, or when its exposure would affect the reputations of powerful government, corporate, or international figures or families.

The intelligence community's license to share criminal intelligence with federal law enforcement agencies, and the inevitable trickle down to state agencies through joint task forces[14] and fusion centers[15], broadens the implications and the consequences of privileged communications surveillance in routine criminal practice. Police priorities, political priorities, and publicity priorities all skew the incentives toward using surveillance-based criminal intelligence far from the realm of espionage and terrorism cases. The use of parallel construction[16] to cloak any linkage of the actionable intelligence to mass surveillance sources gives cover and encouragement to local law enforcement by assuring that any well-concealed violation of privileged communications is never put before a court.

Just as a national security agenda may 'trickle down' to investigations at a local police level through joint task forces and fusion centers, so too can a local security agenda 'trickle up' to gain sanction for the robust use of surveillance directed at local threat priorities identified by local police. Local leadership in communities of color, along with social justice, peace, and environmental activists, even animal rights activists, has historically experienced intense surveillance and infiltration from local and federal law enforcement, fueled by police suspicion and resentment and carried out with the full array of technologies available at the time. In the quid pro quo relationships existing between federal and local law enforcement agencies, the surveillance tools designed to defend the national security are often deployed in defense of the status quo. In those cases where the client is an individual whom police associate with a dissident local group espousing radical politics or social justice, racial, anti-war, or anti-capitalist sentiments, there is substantial risk of physical, digital, and communications surveillance, on or off the ledger of accountability to elected officials.

There are also dire consequences for a defendant when a confidence meant for his attorney finds its way to law enforcement agencies that act upon that tip from an undisclosed surveillance interception. Persons not under suspicion may suddenly find themselves targets and logically conclude that the client has informed on them to law enforcement, rather than having only informed his attorney. The exploitation of intercepted privileged communications in organized crime cases, drug conspiracy cases, gang related cases, and terror prosecutions can all lead to a snitch’s fate for a defendant who breached no trust with his fellow conspirators, but trusted his lawyer. Equally sobering is the prospect that these unknown third parties with anger management issues may hold the defendant’s attorney liable for their compromise.

When one or more of these factors is integral to a case, there exists a credible risk of persistent, aggressive surveillance from one or more of these many actors. It is always lawyerly to admit that our best professional insight may be inadequate as to what factors in a case focus clandestine surveillance upon the defense.  What we guess, what we presume, and even what we know about our case facts, may fall short of what those with the power to surveil or to hack us consider valuable to their own ends. Our footprints in the digital snow, as well as our clients’, may lead to consequences we simply can’t anticipate. Our default practice should be to leave as few footprints as possible.


Protecting your communications, your documents, and your Internet usage from bulk surveillance and targeted attacks requires a broad spectrum of security-enhancing tools.

It is critical to remember that security is a process, not a purchase. No tool is going to give you absolute protection from surveillance in all circumstances. Using encryption software will generally make it harder for others to read your communications or rummage through your computer’s files. Attacks on your digital security will always seek out the weakest element of your security practices. The tools and practices recommended below have been chosen to maximize the security benefit they provide, while minimizing the effort required to use them.

Using Strong Passwords and a Password Manager

The first task in securing your digital world is to start using strong passwords. Almost every online service, not to mention every form of encryption, relies on some sort of password—which makes your password the first thing an attacker will try to break. And attackers have an advantage: computers are now fast enough to quickly guess passwords shorter than ten or so characters, even totally random ones like “nQ\m=8*x” or “!s7e&nUY.”

So how do you select a strong password? The most straightforward method is Arnold Reinhold’s “Diceware” method.[17] Diceware involves rolling actual physical dice to randomly choose several words from a word list; together, these words form what is called a passphrase. The benefit of this method is that random words are a lot easier to remember than random characters, and you need fewer of them: a six-word passphrase can be stronger than a 12-character password, because there are a lot more words to choose from than there are characters (even if you include upper and lowercase, numbers, and symbols) so it’s harder for an attacker to try all the possible combinations of words.
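The arithmetic behind that claim is easy to check. Below is a minimal Python sketch of the Diceware selection step and the entropy comparison; it simulates the dice with the `secrets` module (real Diceware deliberately uses physical dice, and the word list itself is omitted here):

```python
import math
import secrets

# A minimal sketch of the Diceware idea, with `secrets` standing in for
# physical dice. Five rolls of a six-sided die index one of 6**5 = 7776
# words; the standard Diceware word list is exactly that size.
WORDLIST_SIZE = 6 ** 5  # 7776

def roll_word_index() -> int:
    """Combine five simulated dice rolls into a wordlist index (0-7775)."""
    index = 0
    for _ in range(5):
        index = index * 6 + secrets.randbelow(6)
    return index

# Why six words suffice: entropy in bits is (length) * log2(alphabet size).
six_words = 6 * math.log2(WORDLIST_SIZE)   # ~77.5 bits
twelve_chars = 12 * math.log2(62)          # ~71.4 bits for random letters + digits
```

At roughly 77.5 bits, six random words carry more entropy than twelve fully random mixed-case alphanumeric characters (~71.4 bits), while remaining far easier to memorize.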

Of course, it’s important to never reuse a password on different services, because if an attacker gets hold of one password, she will often try using that password on your other accounts. If you reused the same password several times, the attacker will be able to access each account where it was reused. That means a given password may be only as secure as the least secure service where it’s been used.

That's all well and good, but how are you supposed to remember dozens of different passwords? Fortunately, you don't have to. There are software tools—called password managers (or password safes)—that can protect all of your passwords with a single strong master passphrase, so you only have to remember one thing. This makes it practical to avoid using the same password in multiple contexts. In fact, if you use a password manager, you no longer need to even know the passwords for your different accounts; the password manager handles the entire process of creating and remembering them for you.
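The password manager's core job can be pictured in a few lines of Python. This is an illustrative sketch, not any real manager's implementation, and the service names are placeholders:

```python
import secrets
import string

# Illustrative sketch of what a password manager automates: one unique,
# cryptographically random password per service, so nothing is ever reused.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Draw each character independently from ALPHABET using a secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# The "vault" maps services to passwords; a real manager encrypts this
# mapping on disk under your single master passphrase.
vault = {service: generate_password() for service in ("email", "bank", "filing")}
```

Because each password is random and never reused, a breach at one service reveals nothing about your accounts anywhere else.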

The Electronic Frontier Foundation (EFF)[18] recommends KeePassX[19], which is a free and open source password manager.  KeePassX works with files called password databases, which are exactly what they sound like: files that store a database of all your passwords. These databases are encrypted when they’re stored on your computer’s hard disk, so if your computer is off and someone steals it they won’t be able to read your passwords.

Note that KeePassX doesn’t have a built-in sync feature—it won’t automatically sync your password database between different devices. So what if you need your passwords on more than one computer? As long as you use a strong master passphrase, it should be relatively safe to sync KeePassX’s password-database file to other devices using any cloud-based service (Dropbox, Google Drive, etc.). That’s because the password-database file is encrypted using your master passphrase, so even someone who gets access to your cloud sync service won’t be able to read your passwords. (It’s worth re-emphasizing the importance of using a six-or-more word passphrase if you’re going to sync your password-database to the cloud.) And if you need your passwords on your smartphone, there are also KeePass clients for Android and iOS.

Encrypting your Devices

Now that you know how to pick a strong password and store all your passwords securely, the next step to maintaining attorney-client privilege is to ensure that your files and documents are safe at rest, i.e., when they're stored on your computer or smartphone, so that a lost or stolen device isn't an open book for a would-be thief.

It’s safest and easiest to encrypt all of your data, not just a few folders. Most computers and smartphones offer complete, full-disk encryption as an option.

If you use a Mac, chances are your computer is already encrypted: versions of OS X 10.10 and later ("Yosemite", "El Capitan", and "Sierra") all enable disk encryption by default using a tool called "FileVault".[20]

If you use a PC, Windows calls its encryption system "BitLocker." BitLocker is built into Windows 7, 8, and 10, but only the non-Home editions (e.g. Windows Professional or Enterprise). It's not necessarily enabled by default, so you may have to enable it.[21] Some PCs don't support BitLocker—in that case, you can try using a free, open-source tool called DiskCryptor.[22]

In addition to your computers, your smartphones (which are basically tiny super-portable computers, after all) should also be encrypted. If you have an iPhone 3GS or later, an iPod touch 3rd generation or later, or any iPad, you can enable encryption. In fact, most modern Apple devices encrypt their contents by default, with various levels of protection.[23] You can also encrypt Android smartphones running Android Gingerbread (2.3) or later. Some smartphones running Android Lollipop (5.0 or higher) will have encryption enabled by default.[24]

Whatever your device calls it, encryption is only as good as your password. If your attacker has your device, they have all the time in the world to try out new passwords. Forensic software can try millions of passwords a second. That means that a four-digit PIN is unlikely to protect your data for very long at all, and even a long password may merely slow down your attacker. Thus, you should use a nice, strong, six-plus-word Diceware passphrase when encrypting your computer, and at least a six-digit PIN code for your smartphone.[25]
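The back-of-the-envelope arithmetic makes the point vivid. The sketch below assumes a guessing rate of one million passwords per second, a deliberately round figure for illustration; real forensic rigs vary, and hardware rate-limiting on modern phones can slow them considerably:

```python
# Worst-case time to exhaust a keyspace at an assumed guessing rate.
GUESSES_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(keyspace: int) -> float:
    """Time to try every possible password, expressed in years."""
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

four_digit_pin = years_to_exhaust(10 ** 4)       # gone in a hundredth of a second
six_digit_pin = years_to_exhaust(10 ** 6)        # gone in one second
six_word_diceware = years_to_exhaust(7776 ** 6)  # billions of years at this rate
```

A six-word Diceware passphrase has 7776^6 (about 2.2 x 10^23) possibilities, putting exhaustive search out of reach even for an attacker guessing millions of times faster than assumed here.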

It’s also important to note that even if your device is encrypted, an attacker may be able to get around that encryption and access your files via any backups you regularly make. If your backups are to the cloud, the connection between your device and the cloud will almost certainly be encrypted, so you don’t have to worry about information being leaked as it’s being sent and received. However, it’s possible that the backup itself may not be stored in an encrypted manner, so anyone with access to your cloud backup account could access your files (or a government could pressure the service to turn them over). To avoid this weakness, make sure to choose a cloud backup provider that encrypts the data before it leaves your computer (sometimes known as a zero-knowledge system, since the provider has “zero” knowledge about your files).

Alternatively, if you back up to a local device (like an external hard-drive), just make sure that device is also encrypted.[26]

Finally, note that encrypting an entire disk for the first time may make your device (be it your smartphone or computer) slower than usual for several hours, so we recommend starting this process before going to sleep, or letting it run over the weekend. Once the initial encryption process is complete, however, you shouldn’t notice much of a performance difference for most modern devices.

Browsing the Web Securely (and Anonymously)

When it comes to browsing the web, there are three major ways modern technology leaks information to attackers or government agencies.

The first privacy leak comes from the fact that not all communications between your computer and the websites you visit are encrypted. In other words, when you tell your browser to fetch a webpage for a given website, that request, and the page the website sends back, are not necessarily encrypted. This means anyone who can intercept the traffic between your computer and the website (including government agencies, but also anyone nearby if you're using an open Wi-Fi connection) can see what you're reading, as well as any information you might send back.

Of course, many websites do use encrypted connections—your bank, or a web-based email provider, for example, almost certainly use encryption. But how can you tell? Look for an “s” after the “http” in your browser’s URL bar. If it says “http://”, it’s not encrypted. But if it says “https://”, the connection is encrypted.
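The check you make by eye in the URL bar can be expressed in a line of Python (a sketch; `example.com` is just a placeholder address):

```python
from urllib.parse import urlparse

# A URL's scheme tells you whether the connection to that site will be
# encrypted in transit: "https" means TLS-protected, "http" means plaintext.
def is_encrypted(url: str) -> bool:
    return urlparse(url).scheme == "https"

is_encrypted("http://example.com/login")   # → False: readable by interceptors
is_encrypted("https://example.com/login")  # → True: encrypted in transit
```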

Unfortunately, there’s not much you can do if the connection isn’t encrypted; websites have to proactively offer encryption, and you can’t force a website to upgrade to an encrypted connection if the website doesn’t support it. Sometimes, however, a website will support encrypted connections, but not use them by default. To deal with that case, you can install one of EFF’s browser add-ons, HTTPS Everywhere. HTTPS Everywhere is available for Firefox and Chrome browsers, and will automatically upgrade your connection to a secure one on any website that supports it.[27]

All the encryption in the world won’t help with the second privacy leak, which is third-party tracking. When you view a webpage, that page will often be made up of content from many different sources.  For example, even though only one address will show up in your browser’s URL bar, a news webpage might load the actual article from the news company, ads from an ad company, and the comments section from a different company they have a contract with to provide that service. If you visit lots of different websites, and those different websites all use the same ad provider, then that ad provider can track you as you browse the web—often without your knowledge.

To block this non-consensual third-party tracking, EFF has another browser add-on for Firefox and Chrome, called Privacy Badger.[28] Privacy Badger stops advertisers and other third-party trackers from secretly tracking where you go and what pages you look at on the web.  If an advertiser seems to be tracking you across multiple websites without your permission, Privacy Badger automatically blocks that advertiser from loading any more content in your browser.  To the advertiser, it’s like you suddenly disappeared.

However, neither encryption nor blocking third-party tracking can prevent the final privacy leak, which is the fact that when you visit a website, the website itself knows you visited and can track your subsequent visits. Additionally, anyone who can intercept your traffic will be able to tell when you visit that website and for how long, because while what you send or receive may be encrypted, the identity of the website you’re visiting is never encrypted. Once again, that means that aspects of your browsing activity are susceptible to bulk surveillance—as well as anyone who can pressure your Internet service provider into watching your traffic.


To plug this privacy hole, you can use the Tor Browser.[29] Tor Browser works just like other web browsers, except that it sends your communications through a network of volunteer-run computer relays, making it harder for people who are monitoring you to know exactly what you’re doing online, and harder for people monitoring the sites you use to know where you’re connecting from. Keep in mind when using Tor Browser that only activities you do inside of Tor Browser itself will be anonymized. Having Tor Browser installed on your computer does not make things you do on the same computer using other software (such as your regular web browser) anonymous. And of course, logging in to a site like Facebook or Google via Tor Browser will enable those services to track you anew for as long as you keep Tor Browser open.

Communicating Securely

Communicating is probably the most difficult task to accomplish securely, since you have to coordinate with whomever it is you're communicating with. Fortunately, there are some software tools out there that make the process a little less painful.

Let’s start with text messages and instant messaging apps. Generally speaking, neither text messages nor instant messages are encrypted—which means anyone who can see the messages as they travel between your smartphone and your client’s smartphone can read them—particularly government agencies that perform bulk surveillance. Some instant messaging apps—Google Hangouts, for example, or Facebook Messenger—do encrypt the messages in transit, but they have to pass through a central server, where they are temporarily decrypted (and often recorded). As a result, anyone who can hack your account (or pressure the company into turning over data) can read your past messages. Very few instant messaging services actually provide what’s known as end-to-end encryption—named thus because the messages are encrypted at one end of the communications channel and aren’t decrypted until they reach the other end. Only end-to-end encryption ensures that only you and your client can read your messages.

One of the few choices out there for end-to-end messaging is an app called Signal, available for Android and iOS.[30] Signal not only encrypts your text messages (to other people using Signal on their smartphones), it also allows you to make encrypted voice calls.

Unfortunately, email encryption is a little more difficult. By default, email is not encrypted when you transmit it over the Internet—it’s like a postcard, readable by anyone who handles it. Depending on which email provider you use, parts of the delivery channel may be encrypted. For example, most web-based email providers (Gmail, Outlook, etc.), encrypt the connection between your computer and their server. But once your email leaves their servers, it may or may not be encrypted any longer.


To get around this, you need a system that encrypts your email—essentially an encryption “envelope” you can drop your message into. The most common system is called “PGP”. It takes quite a bit of work to set up, and you have to use desktop or app-based email software to actually read your email, but the results—seamless, end-to-end email encryption—are well worth it.[31]
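PGP’s “envelope” works by combining two kinds of encryption: a random session key encrypts the message itself, and the recipient’s public key seals the session key so that only the matching private key can open it. The sketch below illustrates that hybrid idea in Python with textbook-sized RSA numbers; it is a toy illustration of the concept only, not PGP’s actual format, and nothing like real-world security:

```python
import hashlib

# --- Toy RSA key pair (textbook numbers, far too small for real security) ---
p, q = 61, 53
n = p * q                 # public modulus (3233); the recipient publishes n and e
e = 17                    # public exponent
d = 2753                  # private exponent; the recipient keeps this secret

# --- The "envelope": seal a random session key with the recipient's public key ---
session_key = 42                             # stand-in for a random symmetric key
sealed_key = pow(session_key, e, n)          # anyone can seal using the public key...
assert pow(sealed_key, d, n) == session_key  # ...but only the private key opens it

# --- The message itself is encrypted with the fast symmetric session key ---
def keystream(key: int, length: int) -> bytes:
    """Derive a toy keystream from the session key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

plaintext = b"Privileged and confidential."
stream = keystream(session_key, len(plaintext))
ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
recovered = bytes(a ^ b for a, b in zip(ciphertext, stream))
assert recovered == plaintext
```

Real PGP uses keys thousands of bits long and vetted symmetric ciphers; the point of the sketch is only that the slow public-key step protects a small session key, while the fast symmetric step protects the message itself.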

Alternatively, if PGP proves too daunting, you can fall back on a more ad hoc system to communicate securely over email. For example, you could agree on a specific, strong, shared passphrase ahead of time with your client.[32] Then, to send a message to your client, you can write your message in a text (or Word) document (instead of in the body of an email), encrypt the document via a program like 7-Zip for Windows[33] or Keka for Mac OS X[34] (using the passphrase you agreed on ahead of time), and then send the encrypted document as an attachment to an email. Your client then simply has to download the attachment and extract the document (using the shared passphrase you agreed on ahead of time).
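Under the hood, tools like 7-Zip and Keka stretch your passphrase into an encryption key through many rounds of hashing, which is why the strength of the passphrase matters so much. The sketch below shows that kind of key derivation using Python’s standard library; it is a simplified illustration of the general technique (PBKDF2), not 7-Zip’s exact algorithm, and the passphrases shown are placeholders:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a shared passphrase into a 256-bit encryption key.

    The repeated hashing (PBKDF2-HMAC-SHA256 here) deliberately slows down
    an attacker trying passphrase guesses one after another.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, iterations)

# The salt is random but not secret; it travels alongside the encrypted file.
salt = os.urandom(16)

# You and your client each derive the same key from the agreed passphrase.
sender_key = derive_key("hypothetical agreed passphrase", salt)
receiver_key = derive_key("hypothetical agreed passphrase", salt)
assert sender_key == receiver_key

# A wrong guess produces a completely different key.
assert derive_key("wrong guess", salt) != sender_key
```

Note that the iteration count only slows down guessing; a weak passphrase still yields a weak key, which is why a long, randomly chosen passphrase is essential.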

It’s important to note that a system like this has some downsides. For example, PGP allows you to verify the identity of whoever sent you an email, but in this system, anyone who discovers the shared passphrase could impersonate someone else and send an encrypted message. Additionally, 7-Zip’s encryption code hasn’t necessarily been vetted in as much detail as the code in tools like PGP designed specifically for secure communication. With that said, while such a system might not be ‘NSA-proof’, it’s probably sufficient to keep a purely passive adversary from reading your conversations.
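The impersonation problem has a concrete technical root: anything derived from a shared secret proves only that the sender knows the secret, not who the sender is. The short sketch below makes this visible using an HMAC, a standard keyed message digest from Python’s standard library; the passphrase shown is of course a placeholder:

```python
import hashlib
import hmac

# Hypothetical agreed secret; anyone who learns it gains the same powers.
SHARED_PASSPHRASE = b"hypothetical agreed passphrase"

def tag_message(message: bytes) -> str:
    """Compute an authentication tag tied to the shared passphrase."""
    return hmac.new(SHARED_PASSPHRASE, message, hashlib.sha256).hexdigest()

message = b"Meet at the courthouse at 9am."
tag = tag_message(message)

# Your client can check the tag and conclude the sender knows the passphrase...
assert hmac.compare_digest(tag, tag_message(message))

# ...but an impostor who discovers the passphrase produces an identical, valid
# tag, so the tag identifies the set of passphrase holders, not one sender.
impostor_tag = hmac.new(SHARED_PASSPHRASE, message, hashlib.sha256).hexdigest()
assert impostor_tag == tag
```

By contrast, a PGP signature is made with a private key that only one person holds, which is what makes individual sender verification possible.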


As it is with technologies, so it is with surveillance: the only constant is constant change. To maintain the security of attorney-client communications and defense work product, criminal defense lawyers must stay alert for news of evolving surveillance threats and new privacy countermeasures. The relative safety of software and computing devices shifts constantly as new flaws are discovered and old bugs are fixed. Companies may compete with each other to provide you with better security, or they may all be under pressure from governments to weaken that security. It’s also important to remember that no software or hardware is entirely secure. Software companies that are honest about the limitations of their products will give you reliable information about whether their application is appropriate for you.

Don’t trust blanket claims that code is ‘military-grade’ or ‘NSA-proof’; these mean nothing, and they are a strong warning that the creators are overconfident or unwilling to consider the possible failings in their product. Because attackers are always trying to discover new ways to break the security of tools, software and hardware often need to be updated to fix new vulnerabilities. It can be a serious problem if the creators of a tool are unwilling to do this, whether because they fear bad publicity or because they have not built the infrastructure to fix problems.

You can’t predict the future, but a good indicator of how software toolmakers will behave in the future is their past activity. If the tool’s website lists previous issues and links to regular updates and information—like specifically how long it has been since the software was last updated—you can be more confident that they will continue to provide this service in the future.

When you buy a new device or a new operating system, keep current with its software updates. Updates often fix security problems in older code that attacks can exploit. Note that older phones and operating systems eventually stop receiving support, even for security updates.[35] Whatever technology you use or buy today will become obsolete, and so will today’s best advice about which software protects you, as surveillance technology evolves to defeat it.

In the coming years, the last refuge of privacy and security, private encryption, will come under attack. Law enforcement sentiment is rising against it as political candidates speak of a ‘surge’[36] in intelligence gathering and others encourage defeating public encryption with back doors, or by compelling duplicate plain-text copies of every encrypted digital communication. Britain’s former Prime Minister, David Cameron, once asked, “Are we going to allow a means of communications which it simply isn’t possible [for governments] to read? My answer to that question is: No, we must not.”[37] In a digital world bursting at its seams with hyper-invasive, aggressive surveillance, constitutional assurances of due process, effective assistance of counsel, and the attorney-client privilege will become hollow artifacts of a past American history unless criminal defense lawyers answer Cameron’s question with “Yes, we must.”



[1]          . June 2, 2016 Bloomberg Law article by Gabe Friedman, quoting Edward Snowden: “Government surveillance is about power. These programs were never truly about terrorism, at least not solely. They were about power.”

[2]       . Five Eyes is a nickname given to the five signatories of a post-WWII treaty of joint cooperation in signals intelligence. Australia, Canada, New Zealand, the United Kingdom, and the United States share their surveillance output with each other, including the surveillance of each other’s citizens.

[3]       . Bloomberg Law article cited above, page 2, also Electronic Frontier Foundation, Feb 22, 2014 “Legal Community Disturbed About Recent Allegations of Spying on Privileged Communications” by Dia Kayyali.

[4]          . Executive Order 12333 was issued by President Reagan in 1981 and amended by President Bush in 2008 with EO 13470. The NSA considers these orders as executive authorization for broad Agency discretion in the implementation of the massive scope of its surveillance activities worldwide.

[5]       . Section 4 of NSA’s Section 702 minimization procedures, cited by Director Alexander in his 10/03/14 letter

[6]       . Alexander letter, see paragraph 9.

[7]       .  For more depth & context regarding these rule changes, see ABA Journal web article posted Sept 1st, 2014, by David Hudson, “NSA surveillance policies raise questions about the viability of the attorney-client privilege.”

[8]       . ABA Rule 1.6 Confidentiality of Information, paragraph (c)

[9]       . Per the Model Rule 1.6’s Comments at (18): “Factors to be considered in determining the reasonableness of the lawyer’s efforts include, but are not limited to, the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients. A client may require the lawyer to implement special security measures not required by this Rule or may give informed consent to forgo security measures that would otherwise be required by this Rule.”

[10]      . Opinion 12-01 (February 2012), page 2, finding in sub-paragraph 1. Approved by the NACDL Board of Directors, February 19, 2012.

[11]      . Quoting Digest of NACDL Ethics Advisory Committee Opinion 02-01 (November, 2002)

[12]      . For an interesting read about the quandaries of self-protection from Internet surveillance, see Dragnet Nation, by Julia Angwin.

[13]      . Aside from technological methods, the “social engineering” deceits of impersonation, of falsely claiming ties to defense personnel, and of infiltrating the defense team through private informants who befriend, entice, and emotionally or financially compromise defense staff are separate risks that no technology will protect against.

[14]      . A Joint Task Force is a multi-jurisdictional operational intelligence gathering and investigative partnership drawing personnel from many federal and state law enforcement agencies that is charged solely with the investigation of one particular criminal activity or organization, such as terrorism, organized crime, drug cartels, or gangs.

[15]      . Fusion Centers administer and promote information sharing between the CIA, FBI, the Department of Justice, the U.S. military, the private sector, and state and local law enforcement to provide investigative data for intelligence analysis.

[16]      . Parallel construction is a strategy of deceptive omission or of false representation of facts used by law enforcement to conceal the true source of information used in a criminal investigation.

[17]      . More information on diceware is available at

[18]      . The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. As part of its mission to promote privacy and security online, EFF has developed a website called Surveillance Self-Defense, which includes detailed guides and how-tos on defending yourself from surveillance by using secure technology and developing careful practices. Much of the advice about software choices and secure computing was copied or adapted from the Surveillance Self-Defense guide, which is published under a Creative Commons Attribution license (i.e., free for copying and sharing without prior permission).

[19]      . A guide to using KeePassX is available at

[20]      . To check and see if your system is encrypted, and to turn encryption on if it’s off, you can follow the instructions at When your computer asks how you want to store your recovery key, choose the option that does not use your iCloud account, and then make sure to keep a physical (i.e. written-down) copy of your recovery key in a safe place. If you forget your password, you’ll need it in order to decrypt your computer.

[21]      . For Windows 7 instructions, see For Windows 8.1 instructions, see,2-723-4.html. For Windows 10 instructions, see In all cases, when you’re given the option to save your recovery key, we recommend printing it out and then keeping a copy in a safe place.  If you forget your password (or change your system’s hardware), you’ll need it in order to decrypt your computer.

[22]      . For instructions on using DiskCryptor, see

[23]      . To check if your device is encrypted, follow the instructions at

[24]      . To find out if your device is encrypted, and to encrypt it if it’s not, you can follow the instructions at

[25]      . See footnote 17 for more information on choosing a strong passphrase.

[26]      . Either way, make sure you’re backing up your data!

[27]      . You can download HTTPS Everywhere from the Chrome Store, Mozilla Add-Ons website, or

[28]      . Privacy Badger can also be acquired from the Chrome Store, Mozilla Add-Ons website, or

[29]      . A guide to using Tor Browser for Windows is available at A guide to using Tor Browser for Mac OS X is available at

[30]      . Instructions for using Signal on iOS are available at, and on Android at

[31]      . An introduction to PGP is available at A guide to using PGP on Mac OS X is available, and a guide for Windows is at

[32]      . See footnote 17 for more information on choosing a strong passphrase.

[33]      . 7-Zip is free, open source software, available from http://

[34]      . Keka is free, open source software, available from

[35]      . In particular, Microsoft has made it clear that Windows XP and earlier Windows versions will not receive fixes for even severe security problems. If you use XP, you cannot expect it to be secure from attackers. The same is true for OS X before 10.7.5 or “Lion.”

[36]      . June 14, 2016 Reuters article by Dustin Volz, “Clinton calls for U.S. ‘intelligence surge’ in wake of Orlando attack.”

[37]      . The Guardian, Jan. 12, 2015 “David Cameron pledges anti-terror law for Internet after Paris attacks” by Nicholas Watt, Rowena Mason and Ian Traynor.