Assessing the Federal Trade Commission’s Privacy Assessments

Why read this article?

This short article explains the big difference between an "audit" and an "assessment," the latter being the tool the Federal Trade Commission uses to oversee companies subject to consent decrees. "Assessment" is a term of art in accounting wherein the client defines the basis for the evaluation, and an accounting firm certifies compliance with that client-defined standard. An audit, by contrast, is an evaluation against a defined, externally developed standard, such as the requirements of an International Organization for Standardization (ISO) standard.

This article offers five approaches to improve the assessment process:

  • Require Systems Compliance Tests
  • Use Audit-Like Standards
  • Interview Stakeholders outside the Company
  • Require Disclosure of System Changes
  • Make Assessments More Public

Cite as: Chris Jay Hoofnagle, Assessing the Federal Trade Commission’s Privacy Assessments, 14 IEEE Security & Privacy 58–64 (2016).

Consumer protection regulators worldwide share a basic problem: the companies they police are so powerful and rich that fines do not matter. Consider France's €150,000 fine against Google in 2014. Efficacious fines against dominant platforms would have to reach nine figures to cause change, but consumer protection agencies generally lack the authority and political will to levy such fines. As a result, consumer protection officials ensure compliance by monitoring defendant companies. Even this is a challenge, however. Although consumer protection agencies such as the US Federal Trade Commission (FTC) have decades of experience in evaluating misleading advertising, information security and privacy oversight differs from advertising matters. Because information security and privacy issues are difficult to observe and, even if detected, difficult to understand, the FTC and other enforcement agencies rely on outside "assessments" by accounting and security consultants. These assessments evaluate the veracity of defendant company managers' claims about privacy and security protection of consumer information.

Accounting and security firms now have a lucrative and growing business in performing assessments required by the FTC and state attorneys general. In a real sense, consumer privacy worldwide depends on these assessments, as international regulators rely on the FTC's oversight of companies serving consumers in other countries. Unfortunately, assessments are misunderstood by many in the policy realm, who mistakenly believe them to be as rigorous as a formal audit. This lack of knowledge of the differences between assessments and audits allows the FTC and respondent companies to tout assessments as an effective tool to improve practices.
In this article, I discuss efforts to oversee companies’ privacy and security programs through the lens of two assessment reports on TRENDnet and Google and offer five suggestions to increase accountability in the assessment process.


October 15, 2019

Comments on the CCPA

I filed the following comments today on the CCPA to the CA AG.

March 8, 2019

VIA Email

California Department of Justice
ATTN: Privacy Regulations Coordinator
300 S. Spring St.
Los Angeles, CA 90013

Re: Comments on Assembly Bill 375, the California Consumer Privacy Act of 2018

Dear Attorney General Becerra,

I helped conceive of the high-level policy goals of the privacy initiative that was withdrawn from the ballot upon passage of AB 375. Here I provide comments on the context and high-level policy goals of the initiative, in hopes that they help your office in contemplating regulations for the CCPA.

Strong policy support for the initiative

As you interpret the CCPA, please bear in mind that the initiative would have passed because Americans care about privacy. In multiple surveys, Americans have indicated support for stronger privacy law and dramatic enforcement. Americans have rarely been able to vote directly on privacy, but when they do, they overwhelmingly support greater protections. One example comes from a 2002 voter referendum in North Dakota where 73% of citizens voted in favor of establishing opt-in consent protections for the sale of financial records.[1]

A series of surveys performed at Berkeley found that Americans wanted strong penalties for privacy transgressions. When given options for possible privacy fines, 69% chose the largest option offered, “more than $2,500,” when “a company purchases or uses someone’s personal information illegally.” When probed for nonfinancial penalties, 38% wanted companies to fund efforts to help consumers protect their privacy, while 35% wanted executives to face prison terms for privacy violations.

Information is different

The CCPA is unusually stringent compared to other regulatory law because information is different from other kinds of services and products. When a seller makes an automobile or a refrigerator, the buyer can inspect it, test it, and so on. It is difficult for the seller to change a physical product. Information-intensive services, however, are changeable and abstract; because we have no physical experience with information, consumers cannot easily see their flaws and hazards the way one can see an imperfection in a car's hood.

Because information services can be changed, privacy laws tend to become stringent. Information companies have a long history of changing digital processes to trick consumers and to evade privacy laws in ways that physical product sellers simply could not.[2] 

Some of the CCPA's most derided provisions (e.g., its application to household-level data) respond to specific industry evasions made possible because information is different from physical products. Here are common examples:

  • Sellers claim not to share personal data with third parties, but then go on to say they "may share information that our clients provide with specially chosen marketing partners."[3] For this reason, the initiative tightened definitions and required more absolute statements about data selling. Companies shouldn't use the words "partner" or "service provider" to describe third-party marketers.
  • Companies have evaded privacy rules by mislabeling data "household-level information." For instance, the Direct Marketing Association (DMA) long argued that phone numbers were not personal data because they were associated with a household.
  • Many companies use misleading, subtle techniques to identify people. For instance, retailers asked consumers their zip code and used this in combination with their name from credit card swipes to do reverse lookups at data brokers.[4]
  • Information companies use technologies such as hash-matching to identify people using “non personal” data.[5]
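The hash-matching technique in the last bullet can be sketched in a few lines. This is an illustrative toy, not any particular vendor's system; real deployments vary (some salt or re-hash identifiers), but the linking mechanism is the same: the parties never exchange raw email addresses, yet matching hashes still join their records about the same person.

```python
import hashlib

def hash_identifier(email: str) -> str:
    # Normalize (trim, lowercase) so the same address always hashes
    # identically, then apply SHA-256. The digest looks "non personal"
    # on its face, but it is a stable identifier for the person.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical data: a retailer and an ad network each hash their own lists.
retailer = {hash_identifier("alice@example.com"): "purchase history"}
ad_network = {
    hash_identifier(" ALICE@example.com "): "browsing profile",
    hash_identifier("bob@example.com"): "browsing profile",
}

# Records about the same person link up without any raw email changing hands.
matches = retailer.keys() & ad_network.keys()
print(len(matches))  # → 1
```

Because hashing is deterministic, any two parties holding the same underlying identifier can compute the same "anonymous" key, which is why the initiative treated such data as personal.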

Careful study of information-industry tricks informed the initiative and resulted in a definitional landscape that attempts to prevent guile. Those complaining about it need only look to the industry’s own actions to understand why these definitions are in place. For your office, this means that regulations must anticipate guile and opportunistic limitations of Californians’ rights.

The advantages of privacy markets

Creating markets for privacy services was a major goal of the initiative. The ability to delegate opt out rights, for instance, was designed so that Californians could pay a for-profit company (or even donate to a nonprofit such as EFF) to obtain privacy services.

There are important implications of this: first, the market-establishing approach means that more affluent people will have more privacy. This sounds objectionable at first, but it is a pragmatic and ultimately democratizing pro-privacy strategy. A market for privacy cannot emerge without privacy regulation to set a floor for standards and to make choices enforceable. Once privacy services emerge, because they are information services and because they can scale, privacy services will become inexpensive very quickly. For instance, credit monitoring and fraud alert services are only available because of rights given to consumers in the Fair Credit Reporting Act that can be easily invoked by third party privacy services. These services have become very inexpensive and are used by tens of millions of Americans.

Some will argue that the CCPA will kill "free" business models and that this will be iniquitous. This reasoning underestimates the power of markets and presents "free" as the only way to deliver news. The reality is more complex. Digital-advertising-supported services do democratize news access, but they also degrade quality. One cost of the no-privacy, digital advertising model is fake news. Enabling privacy will improve quality, and this could have knock-on effects.

Second, the market strategy relieves pressure on your office. The market strategy means that the AG does not have to solve all privacy problems. (That is an impossible standard to meet and perfection has become a standard preventing us from having any privacy.)

Instead, the AG need only set ground rules that allow pro-privacy services to function effectively. A key ground rule that you should promote is a minimally burdensome verification procedure, so that pro-privacy services can scale and easily deliver opt out requests. For instance, in the telemarketing context, the FTC made enrolling in the Do-Not-Call Registry simple because it understood that complicating the process would result in lower enrollment.

There is almost no verification to enroll in the Do-Not-Call Registry, and this is a deliberate policy choice. One can enroll by simply calling from the phone number to be enrolled, or by visiting a website and completing a round-trip email confirmation. This means that online, a consumer can enroll any phone number, even one that is not theirs, so long as they provide an email address. The FTC does not verify that the email address and the phone number belong to the same person.

The low level of verification in the Do-Not-Call Registry is a reflection of two important policy issues: first, excessive verification imposes transaction costs on consumers, and these costs are substantial. Second, the harm of false registrations is so minimal that it is outweighed by the interest in lowering consumer transaction costs. Most people are honest and there is no evidence of systematic false registrations in the Do-Not-Call Registry. More than 200 million numbers are now enrolled.

The AG should look to the FTC’s approach and choose a minimally invasive verification procedure for opt out requests that assumes 1) that most Californians are honest people and will not submit opt out requests without authority, and 2) that verification stringency imposes a real, quantifiable cost on consumers. That cost to consumers is likely to outweigh the interest of sellers to prevent false registrations. In fact, excessive verification could kill the market for privacy services and deny consumers the benefit of the right to opt out. A reasonable opt out method would be one where a privacy service delivers a list of identifiable consumers to a business, for instance through an automated system, or simply a spreadsheet of names and email addresses.
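The "spreadsheet of names and email addresses" mentioned above could be as simple as the following sketch. The three-column layout is my own illustration, not a mandated or industry format:

```python
import csv
import io

def build_opt_out_feed(consumers):
    """Render a structured opt out list as CSV text.

    `consumers` is a list of (name, email) pairs. The column names
    here are hypothetical, chosen only to show how little machinery
    an opt out delivery mechanism actually requires.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "email", "request"])
    for name, email in consumers:
        writer.writerow([name, email, "do-not-sell"])
    return buf.getvalue()

feed = build_opt_out_feed([("Alice Smith", "alice@example.com")])
```

A privacy service could deliver such a file to businesses on a regular schedule; nothing about the format requires notarization, powers of attorney, or per-consumer challenge procedures.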

The AG should look to Catalog Choice as a model for opt outs. Catalog Choice has carefully collected the opt out mechanisms for paper mail marketing catalogs. A consumer can sign up on the site, identify catalogs to opt out from (9,000 of them!), and Catalog Choice sends either an automated email or a structured list of consumers to sellers to effectuate the opt out. This service is free. Data feeds from Catalog Choice are even recognized by data brokers as a legitimate way for consumers to stop unwanted advertising mail. Catalog Choice performs no verification of consumer identity. Again, this is acceptable because the harm of a false opt out is negligible, and because deterring that harm would make it impossible for anyone to opt out efficiently.

I served on the board of directors of Catalog Choice for years and recall no incidents of fraudulent opt outs. The bigger problem was with sellers who simply would not accept opt outs. A few would summarily deny them for no reason other than that allowing people to opt out harmed their business model, or they would claim that Catalog Choice needed a power of attorney to communicate a user’s opt out. The AG should make a specific finding that a power of attorney or any other burdensome procedure is not necessary for delivering verified opt out requests.

The AG should assume that sellers will use guile to impose costs on opt out requests and to deter them. Recall that when consumer reporting agencies were required to create a free credit report website, CRAs used technical measures to block people from linking to it, so that consumers had to enter the website's URL manually. CRAs also set up confusing, competing sites to draw consumers away from the free one. The FTC ultimately had to amend its rule to require a disclosure on all "free" report sites directing consumers to the official free site.

The definition of sell

The definition of sell in the CCPA reflects the initiative’s broad policy goal of stopping guile in data “sharing.”

From a consumer perspective, any transfer of personal information to a third party for consideration is a sale (subject to exceptions for transactional necessity, etc.). But the information industry has interpreted "sale" to mean only transfers for monetary consideration. That is an unfounded, ahistorical interpretation.

The initiative sought to reestablish the intuitive contract law rule that any transfer for value is the "consideration" that makes a data exchange a sale. In the information industry's case, that valuable consideration is often a barter exchange. For instance, in data cooperatives, sellers input their own customer list into a database in exchange for other retailers' data.[6] Under the stilted definition of "sale" promoted by the information industry, that is not data selling. But from a consumer perspective, such cooperative "sharing" has the same effect as a "sale."

Recent reporting about Facebook makes these dynamics clearer in the online platform context.[7] Properly understood, Facebook sold user data to application developers. If application developers enabled “reciprocity” or if developers caused “engagement” on the Facebook platform, Facebook would give developers access to personal data. From a consumer perspective, users gave their data to Facebook, and Facebook transferred user data to third parties, in exchange for activity that gave economic benefit to Facebook. That’s a sale. The AG should view transfers of personal information for value, including barter and other exchange, as “valuable consideration” under the CCPA. Doing so will make the marketplace more honest and transparent.

Disclosures that consumers understand

Over 60% of Americans believe that if a website has a privacy policy, it cannot sell data to third parties.[8]

I have come to the conclusion, based on a series of six large-scale consumer surveys and the extensive survey work of Alan Westin, that the term "privacy policy" is inherently misleading. Consumers do not read privacy policies. They see a link to the privacy policy, and they conclude "this website must have privacy." My work is consonant with that of Alan Westin, who, over decades of surveys, repeatedly found that most consumers think businesses handle personal data in a "confidential way." Westin's findings imply that consumers falsely believe that there is a broad norm against data selling.

In writing consumer law, one can't take a lawyer's perspective. Consumers neither act nor think like lawyers. Lawyers think the issue is as simple as reading a disclosure. But to the average person, the mere presence of a "privacy policy" means something substantive. It looks more like a quality seal (e.g., "organic") than an invitation to read.

This is why the initiative and the CCPA go to such extraordinary measures to inform consumers with “Do not sell my personal information” disclosures. Absent such a clear and dramatic disclosure, consumers falsely assume that sellers have confidentiality obligations.

The CCPA is trying to thread a needle between not violating commercial speech interests and disabusing consumers of data selling misconceptions. These competing interests explain why the CCPA is opt-out for data selling. CCPA attempts to minimize impingement on commercial free speech (in the form of data selling) while also informing consumers of businesses’ actual practices.

Let me state this again: the government interest in commanding the specific representation “Do not sell my personal information,” is necessary to both 1) disabuse consumers of the false belief that services are prohibited from selling their data, and 2) to directly tell consumers that they have to take action and exercise the opt out under CCPA. It would indeed make more sense from a consumer perspective for the CCPA to require affirmative consent. But since that may be constitutionally problematic, the CCPA has taken an opt out approach, along with a strong statement to help consumers understand their need to take action. Without a visceral, dramatic disclosure, consumers will not know that they need to act to protect their privacy. Your regulatory findings should recite these value conflicts, and the need for compelled speech in order to correct a widespread consumer misconception.

Data brokers and opting out

Vermont law now requires data brokers to register, and its registry should help Californians locate opt out opportunities. However, the AG can further assist in this effort by requiring a standardized textual disclosure that is easy to find using search engines. Standardized is important because businesses tend to develop arbitrary terminology that has no meaning outside the industry. Text is important because it is easier to search for words than images, and because logo-based “buttons” carry arbitrary or even conflicting semiotic meaning.

Non-discrimination norms

Section 125 of the CCPA is the most perplexing, yet it is harmonious with the initiative's overall intent to create markets. My understanding of §125 is that it seeks to: 1) prevent platforms such as Facebook from offering a price that diverges widely from costs. For instance, Facebook claims its average revenue per user (ARPU) is about $100/year in North America. The CCPA seeks to prevent Facebook from charging fees that would be greatly in excess of $10/month. Thus, the AG could look to ARPU as a peg for defining unreasonable incentive practices. 2) Prevent the spread of surveillance-capitalism business models into areas where information usually is not at play, for instance, brick-and-mortar businesses.

One area to consider under §125 is the growing number of businesses that reject cash payment. These businesses are portrayed as progressive, but the practice is actually regressive (consumers spend more when they use plastic; the practice excludes the unbanked; it subjects consumers to more security breaches; and it imposes a roughly 3% fee on all transactions). Consumers probably do not understand that modern payment systems can reidentify them and build marketing lists. The privacy implications of digital payments are neither disclosed nor mitigated, and as such, brick-and-mortar businesses that demand digital payment may be coercive under the CCPA.

Pro-privacy incentives

Privacy laws present a paradox: schemes like the GDPR can induce companies to use data more rather than less. This is because the GDPR’s extensive data mapping and procedural rules may end up highlighting unrealized information uses. The CCPA can avoid this by creating carrots for privacy-friendly business models, something that the GDPR does not do.

The most attractive carrot for companies is an exception that broadly relieves them of CCPA duties. The AG should make the short-term, transient use exemption the most attractive and usable one. That exception should be interpreted broadly and be readily usable by those acting in good faith. For instance, short-term uses should be interpreted to include retention of up to 13 months, so long as the data are not repurposed. The broad policy goals of the CCPA are met where an exception gives companies strong pro-privacy incentives. There is no better incentive than encouraging companies to collect only the data they need for transactions, and to keep those data only for the time needed for anti-fraud, seasonal sales trend analysis, and other service-related purposes. For many businesses, this period is just in excess of one year.

Respectfully submitted,

/Chris Hoofnagle

Chris Jay Hoofnagle*
Adjunct full professor of information and of law
UC Berkeley
*Affiliation provided for identification purposes only

[1] North Dakota Secretary of State, Statewide Election Results, June 11, 2002.

[2] Hoofnagle et al., Behavioral Advertising: The Offer You Can’t Refuse, 6 Harv. L. & Pol’y Rev. 273 (2012).

[3] Jan Whittington & Chris Hoofnagle, Unpacking Privacy’s Price, 90 N.C. L. Rev. 1327 (2011).

[4] Pineda v. Williams-Sonoma Stores, Inc., 51 Cal. 4th 524 (2011).

[5]

[6] "Co-operative (co-op) database": a prospecting database that is sourced from many mailing lists from many different sources. These lists are combined, de-duplicated, and sometimes enhanced to create a database that can then be used to select prospects. Many co-op operators require that you put your customers into the database before you can receive prospects from the database.

[7] Chris Hoofnagle, Facebook and Google Are the New Data Brokers, Cornell Digital Life Initiative (2018)

[8] Chris Jay Hoofnagle and Jennifer M. Urban, Alan Westin’s Privacy Homo Economicus, 49 Wake Forest Law Review 261 (2014).



I’m getting a lot of requests for my syllabi. Here are links to my most recent courses. Please note that we changed our LMS in 2014 and so some of my older course syllabi are missing. I’m going to round those up.

  • Cybersecurity in Context (Fall 2019, Fall 2018)
  • Cybersecurity Reading Group (Spring 2020, Spring 2019, Spring 2018, Fall 2017, Spring 2017)
  • Privacy and Security Lab (Spring 2018, Spring 2017)
  • Technology Policy Reading Group (AI & ML; Free Speech: Private Regulation of Speech; CRISPR) (Spring 2017)
  • Privacy Law for Technologists (Spring 2019, Fall 2017, Fall 2016)
  • Problem-Based Learning: The Future of Digital Consumer Protection (Fall 2017)
  • Problem-Based Learning: Educational Technology: Design Policy and Law (Spring 2016)
  • Computer Crime Law (Fall 2015, Fall 2014, Fall 2013, Fall 2012, Fall 2011)
  • FTC Privacy Seminar (Spring 2015, Spring 2010)
  • Internet Law (Spring 2013)
  • Information Privacy Law (Spring 2012, Spring 2009)
  • Samuelson Law, Technology & Public Policy Clinic (Fall 2014, Spring 2014, Fall 2013, Spring 2011, Fall 2010, Fall 2009)

April 5, 2018

Embrace legitimate cost–benefit analysis; recognize that much of it is not legitimate

From Federal Trade Commission Privacy Law and Policy, Chapter 12:

The FTC is surrounded by critics who urge that Agency actions must be more “rigorous” or based in the “sound economic policy” of cost–benefit analysis. There is some merit to this argument. As Peter Schuck explains, cost–benefit analysis has much to offer policy-makers.39 Consumer advocates remain wary of it for the wrong reasons – they tend to dismiss it categorically. If they took up the challenge, they would quickly find that so many analyses presented as “rigorous” to the Commission are anything but sound. Consider these examples.

Cost–benefit analysis and telemarketing

Former Chairman James Miller, representing the "Consumer Choice Coalition," appeared at a commission roundtable to deliver a cost–benefit analysis of telemarketing regulations and the DNCR.40 A major portion of the analysis concerned predictive dialers, a technology that allows telemarketers to ring many numbers at once and then assign a telemarketer to whoever picks up first. This increases efficiencies for telemarketers – to the tune of billions, according to Miller et al. Yet there is also an externalized cost to predictive dialers: so-called "abandoned calls," the problem of a ringing telephone with no one on the other end, known as a "dead-air" call. Depending on how predictive dialers are configured, they could ring up to sixteen numbers at once, potentially interrupting fifteen people with abandoned calls. The FTC took an extensive record in the proceeding, including from small business owners who commented that a dead-air call cost them real money, because the calls interrupted work. Others felt harassed or even felt fear because their telephone rang so often with no caller on the line.

The Miller et al. analysis, however, omitted any examination of the costs of this interruption. Miller et al. should have included at least some minuscule amount to account for the costs of these calls to consumers. The problem was not that people's time could not be calculated. In the same analysis, Miller et al. looked at pre-acquired account number telemarketing, a practice where the telemarketer buys the consumer's credit card number in advance of the sale.41 This practice saves consumers time, at least when a purchase is made, as consumers do not need to reach for their wallets when making purchases. The analysis estimated that sharing billing information with telemarketers saves an average of seventy-five seconds per call. If the information were not shared, they argued, it would impose almost $1.5 billion in costs on telemarketers.

The myopia of Miller et al.'s analysis becomes clear when considering another, unaccounted-for but obvious cost of giving billing information to telemarketers: unauthorized charges. If telemarketers have consumers' billing information, they can make charges without obtaining consumer consent. Recall from Chapter 9 that a single bank had to process more than 95,000 refunds because of unauthorized pre-acquired account number charges. Because a call to a bank to reverse a charge can cost banks dozens of dollars in call support, fraudulent charges can impose massive costs on consumers and businesses.

Miller’s cost–benefit analysis is typical of the quality of work often presented by academics at the Commission: it only counts costs and benefits that support a deregulatory agenda. Miller et al. could have calculated costs of the externalities of telemarketing practices defended. They chose not to.

Fair Credit Reporting Act (FCRA) amendments

Recall from Chapter 10 that preemption of state law in the FCRA was set to expire in 2004. The financial services industry vigorously sought to renew the preemption, and the FTC obliged. The FTC endorsed the industry's request for permanent preemption rather than a sunset that would force Congress to revisit consumer reporting at some time in the future. The FTC made permanent preemption its first policy recommendation rather than fixing then-obvious problems in the FCRA. The FTC's fawning, uncritical assessment of the financial services industry, followed by the credit bubble and crash, is one of the reasons why Congress created a separate agency for financial consumer protection and stripped the FTC of some of its authority in the area.42

The AEI–Brookings Joint Center for Regulatory Studies produced an economic analysis typical of the era. The author participated in a workshop for the AEI–Brookings monograph, an event where financial services representatives were invited to share their thoughts about the benefits of information sharing. Written by Professors Fred Cate and Michael Staten, along with Robert Litan and Peter Wallison, the monograph valorized the policy goals of the bankers and discounted privacy concerns. Rising household and revolving debt presented "no evidence that they pose any major risk to the banking system." The quartet praised the rise of subprime lending and securitization. To the authors, the very rise of credit markets serving the underserved, and the availability of personal information, were themselves proof of the system's goodness and essential soundness.

At the time, privacy advocates wanted more controls over “prescreened” offers of credit, as discussed in Chapter 10. Readily available evidence showed that these offers made it easy to steal others’ identities. Prescreened credit offer processing is often fully automated, meaning that the applications undergo no human review.

One prankster even ripped up a prescreened offer, reassembled it, and managed to get a bank to send him a card at a different address. To remediate this problem, the FTC had to create a "red flag" rule requiring the rather obvious precaution that creditors should exercise care when receiving an altered, forged, or destroyed-and-reassembled application. Identity theft experts even recommended that consumers opt out of all prescreened offers in order to avoid fraud.

The AEI–Brookings quartet dealt with the fraud problem by parroting the views of the financial services industry. At the time, the industry blamed identity theft on consumers: “it seems that most cases of identity theft involve a friend, family member, or coworker.” What was their support for this position? The group cited a two-page-long statement reflecting the personal experiences of an official from a bank.43 Even that statement, if fully read, undermined the AEI–Brookings position, as the same banker also testified that it is “extremely easy” to commit fraud and this could be done by going through another’s mail. Security breach notification laws would later make the “blame it on the victim” strategy impossible to maintain, as credit card information is regularly stolen and sometimes pressed into the service of fraud. But the security breach laws, a state legislature innovation, came too late for Congress to rethink the FCRA. The FTC could have endorsed a sun-setting preemption, an approach Congress adopted in 1996.44 Congress ultimately enshrined permanent preemption into the FCRA, making it unlikely that the law will be updated unless a major emergency arises.

Public choice critiques

Public choice scholars have generated great insights into the pathologies of government, including the problem of agency capture. Public choice critique is often centered on agencies clearly in need of a shake-up. In the context of the FTC, however, these critiques generally miss the mark and are products of their authors' established ideological commitments. McCraw observed, "Even in some of the best scholarship on regulation, failure has often been applied not merely as a conclusion but also as a premise, a tacit assumption hidden behind apparently scholarly explanations presented in theoretical forms: the theories of capture, of public choice, of taxation by regulation, and several others."45 Some public choice work lends insight into the FTC, but some of it borders on conspiracy theory46 or, to put it more generously, simple naïveté.

A Hoover Institution volume published on the FTC from the public choice perspective alleges a long list of pathologies shared between Congress and the FTC. At one point, the editors conclude, “these four papers confirm the suspicions of those who have doubted that independent regulatory agencies such as the FTC are truly independent in formulating and executing regulatory policy.”47 The reality is that no institution is fully independent of its purse strings, and that, in any case, the FTC has rebuffed Congressional objections on many matters. Chapters 1–3 recounted many such examples of independence, including the meatpacking report, investigations into the insurance industry, and the rules surrounding cigarettes, flammable baby blankets,48 funeral practices, telemarketing, and used-car sales.49 When the FTC acts independently, public choice scholars are likely to switch tactics and characterize the Agency as unbalanced.

Methodologically, the Hoover volume, several papers from which are discussed in Chapter 5, relies upon post hoc reasoning;50 ignores counterarguments; assumes that the FTC’s mission is entirely economically motivated, rather than animated by concerns of fairness or small-business interests;51 and otherwise suffers from general weaknesses in methods.52

There are many good reasons to be skeptical of public choice theory narratives of the FTC. Danny A. Bring, analyzing the inception of the Agency, concluded that it is difficult to identify a prime private-sector beneficiary from the creation of the FTC.53 This lack of a primary beneficiary undermines public choice explanations for the creation of the FTC. Considering more recent agency activity, over the course of many interviews with FTC staff, Robert A. Katzmann concluded that both popular public choice and liberal critiques of the Agency were too simplistic.

Katzmann focused his lens on the operating reality of the Commission and found that many factors led the Agency to not always act coherently on a policy front. For instance, while economists may think the FTC should bring a certain case, institutional concerns raised by lawyers about losing cases and creating bad precedent could override neatly prescribed economic policy ideals. Katzmann identified other considerations: the atmosphere of collegiality, which sometimes leads to compromise; different attitudes toward taking a reactive or proactive posture regarding problems in the economy; and disagreements between the Bureau of Economics and other arms of the Agency.

In fact, whether led by Republican or Democratic Commissioners, the Agency has at times urged Congress not to expand its budget, staff, and mission. In not requesting more power, the FTC invites the vitriol of liberal groups.

The public choice notions concerning self-interested attempts to obtain power are also insensitive to the personal reasons why lawyers may forgo very high salaries in the private sector in order to work for a government agency, often for their entire career. Arguments concerning agency capture do not explain the vigorous rulemaking the FTC has pursued in areas including cigarettes, the funeral rule, and telemarketing.

The Beltway libertarians

The “marketplace of ideas” neither softens nor corrects academic cost–benefit and other analyses, for several reasons. Reports prepared for the Commission are sometimes “consulting work” and politely ignored by other faculty. Perhaps guided by lawyerly norms of confidentiality, authors of consulting work sometimes do not disclose corporate sponsorship of their research. Mainstream academics may see the work as policy entrepreneurship and know not to touch it, because engaging with it is not considered serious. Consulting work sometimes is never published, academically or otherwise, and it disappears from the internet shortly after being delivered to the Commission. Consumer groups lack the resources to counter it, in part because it often is presented in the moment, often on the same day as a public event.

The loud parroting of “rigorous” economic analysis by “Beltway libertarians” further skews the public debate. The Beltway libertarians are members of a business liberty movement, one that promotes a radical free-market agenda and equates any policing of business activity with a violation of fundamental freedoms.54 These advocates want criminal procedure-like process for regulatory matters. Beltway libertarians are heavily funded by information-intensive companies, and these groups do the dirty rhetorical work that companies cannot without losing face in Washington.

Information policy debates suffer from the same dynamics as public controversies surrounding climate change. In their 2010 book, Merchants of Doubt, Professor Naomi Oreskes and Erik Conway connected the dots among different waves of libertarian-inspired regulatory attacks. The duo show that the same scientists were paid by industries to spread confusion and to oppose a variety of unrelated regulatory efforts concerning tobacco, climate change, and energy policy. The groups that Oreskes and Conway studied are active in information privacy as well. Technology companies, many of which are trying to create sustainably powered data centers and other environmentally responsible infrastructure, fund the same groups that denied any need to take action for the environment.

Beltway libertarians complain about specific FTC actions in hysterical terms, while devoting almost no ink to powers exercised by the state against individuals in their role as citizen. The Beltway libertarians are not civil libertarians; they are commercial libertarians.

As designed by Congress, the FTC was supposed to provide a quick and reliable alternative to litigation in the federal courts. It reflected a spirit from an earlier time in US history, when the FTC could take a less punitive posture and businesses would swallow its medicine and move on.

Libertarians are able to leverage business frustration with the FTC. At the same time, it is not clear that business patrons of the libertarian movement share55 or understand its radicalism.56 The true libertarian believers would go far beyond big businesses’ agenda, smashing regulations and institutions that mainstream businesses support. This happened in the 1980s, when even advertisers were surprised at and opposed the Reagan-administration FTC agenda against substantiation. And it is happening today in a challenge to the FTC’s 2012 case against the hotel chain Wyndham Worldwide Corporation. In the case, the FTC sued the hotel chain after it suffered a series of security breaches. The FTC found that Wyndham was not following basic security precautions, and the FTC found this both deceptive and unfair. In the challenge, the Chamber of Commerce and other conservative legal activists have argued that Section 5 is unconstitutionally vague, that the FTC does not have standing to sue Wyndham under Article III, and that the FTC should have to plead its cases very specifically. Wyndham lost an interlocutory appeal at the Third Circuit on its constitutional claims,57 and, as of this writing, the case continues on in district court.

As frustrated as some technology companies are with the FTC, few would find it wise to turn the regulatory clock back a century. If Wyndham’s constitutional claims had been successful, it would have been a disaster for the business community and for consumers. Wyndham would have beaten an insignificant security case, while the libertarians would have struck at the heart of the regulatory state. All agencies that rely on some “unfairness” authority would be undermined; both their ability to act when Congress is silent and the way they act would be called into question. Wyndham would be the first footnote in other challenges to tear down social regulations that clash with libertarian worldviews – things like clean air and water regulations, workplace safety rules, and efforts to ensure network neutrality.

Businesses would suffer because a Wyndham victory would require the Commission to engage in rule-making even on issues where there is broad social agreement, such as security of personal information. Problems that all participants acknowledge as actionable would fester in the marketplace, as the Agency struggled with its Magnuson–Moss rule-makings. The result of rule-making would be a security rule similar to the Gramm–Leach–Bliley Act – a two-page-long document that requires companies to have “reasonable” security measures. A Wyndham victory would even harm businesses internationally because it would mean that the FTC could no longer police privacy. Feeding EU frustration, a Wyndham victory would make it impossible for the FTC to enforce any successor to the US–EU Safe Harbor agreement.

Solving the Beltway academic conundrum

The Beltway libertarians’ intent is to slow down government action using a political pretense that has broad political appeal, at least in the abstract. The libertarians paint a patina of limited-government, free-market principle atop an agenda that is otherwise simply self-interested. Recognizing the Beltway libertarians as just an instrument of the business lobby is the first step to disarming them.

The second step is to start helping the FTC fend off the Beltway libertarians. Alone, the FTC cannot do it, in part because of its need to remain in the center during political debates. Practically speaking, the FTC needs consumer advocates and others, including academics, to analyze libertarian work, and give context to its deficits.

Finally, consumer advocates are generally unwilling to acknowledge the role of cost–benefit analysis in regulation, particularly where the right being evaluated is a fundamental or human right. Advocates are also suspicious that calls for cost–benefit analysis are attempts to slow down the Agency. But consumer advocates should overcome this resistance. If advocates would only take up these analyses and engage them seriously, the deficits of these “rigorous studies” would quickly become clear. Furthermore, if cost–benefit analysis took costs to consumers – particularly transaction costs – seriously, such analyses would be likely to support consumer protection. As the discussion of telemarketing above showed, the only way the industry could justify its conclusions was to completely omit costs to the consumer from its cost–benefit analysis.

Cost–benefit analysis need not paralyze the Agency. As was shown in the 2015 Wyndham case, the Third Circuit did not require extensive economic analysis to come to the obvious conclusion that Wyndham should have invested in more security precautions.58 Courts more generally will defer to the Agency’s expertise and will not require it to engage in make-work.

40 James C. Miller III, Jonathan S. Bowater, Richard S. Higgins, & Robert Budd, An Economic Assessment of Proposed Amendments to the Telemarketing Sales Rule, June 5, 2002 (on file with author).
41 See Chapter 9.
42 Kirstin Downey, FTC at 100: The Agency in recent times to infinity and beyond!, 869 FTC:WATCH (March 13, 2015).
43 Identity Theft, Hearings before the House Subcommittees on Telecommunications; Commerce, Trade, and Consumer Protection; and on Environment and Hazardous Materials of the House Committee on Commerce, 106 Cong. 1 sess. (April 22, 1999) (Statement of Charles A. Albright, chief credit officer, Household International, Inc.).
44 Paul M. Schwartz, Preemption and Privacy, 118(5) YALE L. J. 902 (2009).
46 The author has encountered the following reasoning in researching this book: the FTC is the agent of large enterprises, creating regulations to help large businesses squash small ones; the FTC uses small-business rhetoric to justify actions that help large businesses; the FTC Act was passed to benefit monopolies and help them control smaller, cutthroat firms; FTC employees increase the complexity of regulations so that they can command high prices when they leave the Agency for private practice; etc.
47 Robert J. Mackay, James C. Miller III, & Bruce Yandle, Public Choice and Regulation: An Overview, in PUBLIC CHOICE & REGULATION: A VIEW FROM INSIDE THE FEDERAL TRADE COMMISSION (Robert J. MacKay, James C. Miller III, & Bruce Yandle eds., 1987).
48 After the Flammable Fabrics Act was amended to strengthen it, a powerful Congressman (a member of the appropriations committee) threatened the Agency, urging it to make a determination that baby blankets are not clothes, and thus not subject to the act’s requirements. Then-Chairman Paul Rand Dixon bent to the will of the member, concerned that the FTC’s funding would dry up otherwise. Commissioner Philip Elman dissented from the Agency’s position, and publicity surrounding it caused baby blankets to be considered clothing for purposes of the act. NORMAN I. SILBER, WITH ALL DELIBERATE SPEED, THE LIFE OF PHILIP ELMAN (2004).
49 “At one time or another during the Commission’s legislative travail, at least one congressional committee or house voted overwhelmingly to abort virtually every major FTC rule making, case or investigation that that had aroused the concern of affected industries or even individual companies.” MICHAEL PERTSCHUK, REVOLT AGAINST REGULATION: THE RISE AND PAUSE OF THE CONSUMER MOVEMENT 73 (1982).
50 An article in this volume concludes that advertising substantiation enriched large advertising firms by analyzing such companies’ stock prices after mention of the substantiation doctrine in the Wall Street Journal. However, firms were engaging in substantiation years before the study period, and other factors could explain why large advertising firms profited during the study period.
51 An article in this volume alleges that large advertisers favor substantiation because it hurts smaller rivals. While for some that may be true, it could just as well be true that these advertisers are offended by false advertising and believe that it hurts the entire industry.
52 An article in this volume concludes that FTC fines are regressive and anti-small business, but the study period ends in 1981 (when the coeditor of the volume James C. Miller takes the helm of the FTC), while the work was not published until 1984. The N is only 57, and the authors note that most cases in the study period were brought against large businesses. The cutoff date for the study is especially strange, given that the Miller FTC reversed course and sued far more smaller advertisers (thus targeting smaller businesses). Ross Petty, in his 1992 study of the FTC, found that the Reagan-era leadership focused on national advertisers in only 10 percent of cases, but that, overall, case selection was much more rational and effective. FTC Advertising Regulation: Survivor or Casualty of the Reagan Revolution?, 30(1) AMER. BUS. L. J. 1 (1992).
53 Danny A. Bring, The Origins of the Federal Trade Commission Act: A Public Choice Approach (1993) (Ph.D. dissertation, George Mason University).
54 Many Beltway libertarians are groomed by George Mason University’s (GMU) School of Law. Professor Steven Teles frames GMU’s law school as a project of activist conservative legal thinkers who found that placing scholars in top schools resulted in them becoming more moderate. GMU was a place where “libertarian professors could hone their ideas without the compromises associated with elite institutions.” STEVEN M. TELES, THE RISE OF THE CONSERVATIVE LEGAL MOVEMENT (2008). GMU has been used as a kind of academic front for Google’s activities. See Tom Hamburger & Matea Gold, Google, Once Disdainful of Lobbying, Now a Master of Washington Influence, WASH. POST, April 12, 2014. (“Facing a broad and potentially damaging FTC probe, Google found an eager and willing ally in George Mason University’s Law & Economics Center.”)
55 The idea of an anything-goes, rugged-individualist California ignores the role of big government, railroads, and aerospace/defense companies in its creation; see JOAN DIDION, WHERE I WAS FROM (2003); GERALD NASH, THE FEDERAL LANDSCAPE: AN ECONOMIC HISTORY OF THE TWENTIETH-CENTURY WEST (1999). (“The size and scale of the new federal [military] establishments were unprecedented. Congress poured more than $100 billion into western installations between 1945 and 1973 . . . The military-industrial complex was the West’s biggest business in the cold war years.”)
56 See, for example, Paulina Borsook’s 2000 book in which she observed that the “technolibertarian” movement fundamentally rejects social contract. PAULINA BORSOOK, CYBERSELFISH: A CRITICAL ROMP THROUGH THE TERRIBLY LIBERTARIAN CULTURE OF HIGH-TECH (2000).
57 FTC v. Wyndham Worldwide Corp., No. 14-3514 (3d. Cir. Aug. 24, 2015).

58 FTC v. Wyndham Worldwide Corp., No. 14-3514 (3d. Cir. Aug. 24, 2015).


February 28th, 2018

Bait-and-switch advertising, bait-and-switch privacy

From Federal Trade Commission Privacy Law and Policy, Chapter 12:

Professor Ross Petty highlighted that some of Posner’s objections reflected unstated assumptions and that recognizing these helps us see the deficits of the Posner critique.27 For instance, Posner dismissed the Agency’s bait-and-switch advertising work without explaining his underlying objection to policing such marketing techniques. For Posner, bait advertising was merely a good-natured way to attract consumers to the store.28 In recent years, bait-style advertising has become the main tool for information-intensive companies to pry personal information from customers.

When Posner wrote his critique, bait advertising was rightly a major concern of the Commission. By the 1950s, the Commission had developed a nuanced understanding of bait and misleading-discount issues. It distinguished between loss leaders, which are low-priced goods that are readily sold, and bait advertisements, for products that the seller would sell only reluctantly.

Posner also rejected the idea that companies would systematically abuse the poor, in part because of “the absence of theoretical reasons for expecting fraud to be rampant in sales to the poor.”29 While Posner in this early work characterized consumer poverty initiatives as based on anecdote and thin data, he overlooked the Commission’s then-recent Washington DC initiative. In it, the FTC discovered that bait advertising was one of the principal deceptive tactics employed against poor consumers in the marketplace.30 The FTC found that consumers would be told the product was sold out or was only available at another store branch. This was a major imposition for poor consumers without cars.

In Posner’s dismissal of the bait advertising problem, we see the limits of his and derivative critiques of the FTC. To borrow Arthur Leff’s phrase, Posner’s critique suffered from a kind of tunnel vision.31 A broader lens brings into focus problems of transaction costs, lock-in, collective action problems, and decision-making biases that lead consumers to uneconomical decisions. Moreover, the electronic marketplace, although presented as a forum where “competition is a click away,” has intensified these problems in some ways. Remarkably profitable fraud schemes rely on the basic premise that individuals are busy and may not be entirely focused on details. Thus, one can capture credit card numbers and make millions in charges for products that are never shipped,32 or fake fees can be invented that the consumer assumes must be paid.33 Perhaps no rational person should pay these charges, yet consumers do.34

Bait and switch also provides a framework to think about information privacy problems. The website that lures consumers with various free services or other promises but that later switches and adopts privacy-invasive practices35 is now a trope in our economy. The switch toward privacy invasion is made possible because of the kinds of consumer behaviors and information economics that exist outside some “tunnels.” These behaviors include the reliance on brand instead of more objective factors in decision-making, the power of network effects, lock-in, and the absence of viable privacy-friendly alternatives.

Facebook provides a good example of digital bait. What was innovative about Facebook in 2004? Several other social networks existed with similar information-sharing features and profile linkages. The problem with social networking at the time came from MySpace’s messy design, the ability of undesirable users to contact and troll others, and network outages that crippled rival network Friendster. Facebook’s real innovation was its marketing, not its technology. The company created a social network based around trusted, existing social groups such as the students of a specific college. It initially was premised on exclusivity, leveraging the reputation of the Ivy League. But as users joined, it relaxed membership requirements, from top-tier private colleges to, eventually, anyone.

Facebook is an information-age bait and switch. Having lured users into its network, it substituted another product for the one advertised. The company changed its disclosure settings, making user profiles dramatically more public over time, while masking its own economic motives36 with claims that users wanted to be “more open.” By the time Facebook made its major privacy changes in 2009, it had such a command of the market that users could not defect. Since then, thoughtful, well-designed alternatives to Facebook have been released. Yet these efforts always fail because of the power of Facebook’s network and switching costs.

Google’s history too could be seen as a policy bait-and-switch. Google entered the search engine market wearing its opposition to obtrusive advertising and to advertising-influenced search results on its sleeve. The company’s founders promised a revolution in both search and advertising. Google even presented its search service as more privacy-protective than competitors because it did not take users’ browsing history into account when delivering search results.

Today, it seems that Google’s advertising policy is in a counterrevolutionary period. It quietly started using behavioral data in search without telling the public. It runs paid search ads prominently at the top of organic search results – mimicking the very thing it considered evil in the 1990s. Google even uses television-like commercials. But these are more invasive than those on television because Google’s technology tracks the user and can tell whether the user is watching. Prior to the internet, one could always go to the restroom during the television commercial break. Someday soon, will Google watch users through their webcams and pause ads when the user visits the bathroom or averts their gaze?

Both Facebook and Google are a kind of privacy long con. The services roped users into a relationship that was promised to be different than competitors. Over time, both companies changed policies and mimicked their competitors. Just as the FTC developed expertise to address the bait-and-switch tactics of retailers in the 1950s, today’s agency needs to focus on the modern version of this problem. Modern privacy bait-and-switches occur over longer periods of time and leverage network effects and lock-in.

  • 27  Ross D. Petty, FTC Advertising Regulation: Survivor or Casualty of the Reagan Revolution?, 30(1) AMERICAN BUSINESS L. J. 1 (1992).
  • 28  Compare Arthur Leff: “Once there you had already spent time, labor, and money to go there rather than elsewhere. That you got a ‘fair’ deal there means little; you were defrauded of the ‘sunk cost’ of going there rather than elsewhere the minute you went.” ARTHUR ALLEN LEFF, SWINDLING AND SELLING (1976).
  • 29  Richard A. Posner, The Federal Trade Commission, 37(1) UNIV. CHI. L. REV. 47 (1969).
  • 31  Arthur A. Leff, Economic Analysis of Law: Some Realism about Nominalism, 60 VIRGINIA L. REV. 451 (1974).
  • 32  FTC v. Sun Spectrum Communications Organization, Inc., No. 03-8110 (S.D. Fl. 2005).
  • 33  FTC v. AmeriDebt, Inc. et al., No. PJM 03-3317 (D. Md. 2014).
  • 34  FTC v. API Trade, LLC et al., No. 110-cv-01543 (N.D. Ill. 2010).
  • 35  Professor Paul Ohm has termed this the privacy “lurch.” Paul Ohm, Branding Privacy, 97 MINN. L. REV. 907 (2013). PAOLA TUBARO, ANTONIO CASILLI, & YASAMAN SARABI, AGAINST THE HYPOTHESIS OF THE END-OF-PRIVACY. AN AGENT-BASED MODELLING APPROACH TO SOCIAL MEDIA (2014).
February 19th, 2018

Reinvigorating Consumer Protection

Professor Mark Nadel suggests four core challenges to making consumer protection a widespread concern: First, consumer interests are diffuse; they are a collective value and must compete with other values. Second, individuals have varying levels of intensity of interest in consumer matters. Mostly, this is a low-level interest, and it can be satisfied with emotional or psychological appeals. Thus, individuals can be satisfied with symbolic consumer protection. But this is a double-edged sword. When symbolic protections are stripped away, individuals can react intensely.

Third, there is a gulf in consumer protection between objective and perceived needs. This leads to a focus on dramatic problems and a lack of attention to more structural, difficult problems.

Fourth, the “consumer interest” is difficult to define…


February 2nd, 2018

Should Regulation Be “Technology Neutral”?

Both consumer and industry advocates argue that regulation should be “technology-neutral.”

Bert-Jaap Koops has carefully explored the demand for technology-neutral regulation. He begins the inquiry by asking why technology law in particular should be technology-neutral. Koops uses the example of traffic law, where there are no calls for uniform technical rules for bicycles, cars, and heavy trucks.

Koops goes on to explain that “technical neutrality” carries three possible meanings, and these meanings can result in conflict:

From the perspective of the goal of regulation, the statement stresses that, in principle, the effects of ICT should be regulated, but not technology itself; it may thus serve as a means to achieve equivalence between off-line and on-line regulation. From the perspective of technology development, the statement stresses that, in principle, regulation should not have a negative effect on the development of technology and should not unduly discriminate between technologies. From the perspective of legislative technique, [technology neutrality] stresses that legislation should abstract away from concrete technologies to the extent that it is sufficiently sustainable and at the same time provides sufficient legal certainty.

Koops argues that the last justification is the most meritorious to promote sustainable lawmaking.

Applied to US policy debates, these three meanings of technologically neutral regulation have very different outcomes. Consider Koops’ first and third categories. These would militate for broad, preventative, principles-based legislation.

Statutes such as the Fair Credit Reporting Act and the Video Privacy Protection Act, which define prohibited behaviors regardless of the technology used, may qualify under this definition of technical neutrality. At the same time, a law such as the Communications Decency Act’s immunity for online platforms would be suspect, because it creates radically different outcomes for online and offline intermediary liability.

Much US regulation, and virtually all self-regulation, violates the principles of Koops’ second definition, which commands that technologies should not be discriminated against. In American parlance, this is often said as “regulation should not pick winners and losers.”

However, almost all anti-marketing regulation is technology-specific, in the sense of picking winners and losers, as are most self-regulatory regimes in privacy. This is partly because of First Amendment constraints, which require the government to tailor regulation to the affordances of specific technologies. As a result, regulation tends to be reactive, rather than proactive, and to target specific technologies. But it is also because of power dynamics. Automatic dialers, for instance, greatly enhanced the ability of telemarketers to call individuals, resulting in millions of “dead-air” hang-up calls. Regulation targets this technology because of the power it transfers to telemarketers: the technology is asymmetric, greatly increasing telemarketer efficiency while giving the consumer no tools (or even disabling their tools) to counter the interruption. Prerecorded voice marketing presents a similar, asymmetric threat to individuals. With minimal investment, a caller can cause massive interruption in individuals’ daily lives.

Herbert Burkert argues that information communication technologies have fundamentally altered information handling. To maintain checks and balances in society, Burkert asserts that technology-specific “responses to such changes in the power structure is needed.”51 He points to the electronic data processing industry as one with practices more dangerous than paper file systems, and thus deserving of stronger regulation.

We might think about these lessons in coming decades, when marketing is likely to be delivered to us by robots or by automated systems that can recognize and confront us in real space. Such advertising is typically depicted in dystopian science fiction (the 2002 movie Minority Report, the 2011 series Black Mirror, and the 2013 film The Zero Theorem). We know that these technologies are coming, and we are likely to react to them in a non-neutral way, regulating each specifically as it arises, rather than prospectively, through principles-based neutral regulations.


Bert-Jaap Koops, Should ICT Regulation Be Technology-Neutral?, in STARTING POINTS FOR ICT REGULATION (Bert-Jaap Koops, Mariam Lips, Corien Prins, & Maurice Schellekens, eds., 2006).

Herbert Burkert, Four Myths about Regulating in the Information Society – A Comment, in STARTING POINTS FOR ICT REGULATION (Bert-Jaap Koops, Mariam Lips, Corien Prins, & Maurice Schellekens, eds., 2006).

February 2nd, 2018

Is Technology “Neutral”?

A kind of paradox is presented by modern technophiles. In the same breath they declare that technology is neutral while touting technology as the actuator of pro-democratic political change.37 In the academic community, many have argued that technology is not neutral but rather a profound “part of our very humanity.”38

In The Whale and the Reactor, Langdon Winner invites the reader to consider the political dimensions of technologies that generate electricity. A society that adopts nuclear power must also have a military-like police force to protect spent rods and by-products of atomic power from misuse. It must have extensive security to prevent a terrorist from flying a plane into the reactor or otherwise triggering a meltdown. Nuclear power distribution is centralized and owned by just a few people, so there are profound economic implications as well.

On the other hand, a society that adopted home solar power would have less of a need for a strong police force. Power generation and ownership would be decentralized and probably impossible to monitor. Simply put, nuclear energy requires a different set of political relationships, and thus Winner labeled it an inherently political technology. Winner suggests other examples of discrimination in design that “enhance the power, authority, and privilege of some over others,” including the allegation that Robert Moses built low bridge overpasses to prevent city buses (and thus the urban poor) from visiting Jones Beach.

Understood in this way, claims that “technology is neutral” may be a technique to mask the political motives of technology companies: “All too often the design of technologies simply conceals the ideologies and political agendas of their creators.”39 Evgeny Morozov thus recommends that policy “clearly scrutinize both the logic of technology and the logic of society that adopts it . . .”40

37 ERIC SCHMIDT AND JARED COHEN, THE NEW DIGITAL AGE: RESHAPING THE FUTURE OF PEOPLE, NATIONS AND BUSINESS (2013). (The authors say such things as “technology is neutral but people are not” and “Technology companies export their values along with their products, so it is absolutely vital who lays the foundation of connectivity infrastructure.”)

38 LANGDON WINNER, THE WHALE AND THE REACTOR (1986). See also Gary T. Marx, Coming to Terms and Avoiding Information Techno-Fallacies, in PRIVACY IN THE MODERN AGE: THE SEARCH FOR SOLUTIONS (Marc Rotenberg & Jermaine Scott eds., 2015).

39  EVGENY MOROZOV, THE NET DELUSION (2011); Evgeny Morozov, Don’t Be Evil, THE NEW REPUBLIC (August 4, 2011).


February 1st, 2018

Chairman Pertschuk’s lessons on regulation

Chairman Michael Pertschuk was one of the most qualified FTC leaders ever. Educated at Yale Law School, he clerked for a federal district judge, practiced at a firm, and then spent fifteen years on Capitol Hill. His Hill experience brought him great expertise in consumer protection, as he was chief counsel to the Senate Commerce Committee during the expansion of consumer rights in the 1970s.

Pertschuk led the FTC during its most controversial years. In his 1982 book, Revolt against Regulation, he gave a personal account of lessons learned from the newfound skepticism of government regulation.55 He offered consumer advocates seven lessons in consumer regulation. They should ask:

  • Is the rule consonant with market incentives to the maximum extent feasible?
  • Will the remedy work?
  • Will the chosen remedy minimize the cost burdens of compliance, consistent with achieving the objective?
  • Will the benefits flowing from the rule to consumer or to competition substantially exceed the costs?
  • Will the rule or remedy adversely affect competition?
  • Does the regulation preserve freedom of informed individual choice to the maximum extent consistent with consumer welfare?
  • To what extent is the problem appropriate for federal intervention and amenable to a centrally administered national standard?

Pertschuk’s book is an anomaly among Washington memoirs, which typically involve some trope about “reforming Washington,” with failures attributed to intractable “bureaucracies” and the like. Pertschuk wrestles with questions fundamental to whether consumer protection is effective, and declares that his experience taught him the (albeit limited) value of cost–benefit analysis.


January 2nd, 2018

DOC: No Records on Privacy Shield Removal Procedure

Back in November, I posted the Department of Commerce’s Privacy Shield checklist. The next logical step was to request DOC’s procedures for removal of companies from the Privacy Shield (submitted Dec. 1). Today, DOC-International Trade Administration responded with a “no records” response. It is not clear to me what date the search took place, and ITA is careful to say that their search did not include non-ITA Commerce elements. I’m following up on that.

April 14th, 2017