Several months after joining the Commission as director of consumer protection, David Vladeck gave an interview to the New York Times, in which he invoked the d-word – dignity – four times. In describing his role at the Commission, he said, “I think there’s a huge dignity interest wrapped up in having somebody looking at your financial records when they have no business doing that. I think there is a dignity interest that needs to be protected when someone’s looking at, maybe, your prescription medications that you’re getting online. I don’t think the harm model that the Commission has used at times really captures those injuries.”26
Supporters of the harm-based approach felt deeply threatened by the idea that the FTC would use dignity as a case selection factor. Harm supporters reacted hysterically, labeling Vladeck’s views emotional, questionable, vague, nontraditional, and subjective. They warned of expanding liability, of influence from foreign legal interests, and so on. In critiquing Vladeck, harms-based supporters almost always put dignity in quotes, as if it were some Germanism. Even Commission officials tried to soften the interpretation of Vladeck’s use of the word “dignity.”27
Maintaining dignity is a main reason why people seek privacy. Consider the locks on bathroom and bedroom doors: mechanisms that protect a non-economic interest in shielding the naked body from observation. The Commission has taken action in several cases where business practices enabled spying into the home and the capture of images of people within their homes. Such spying does not cause an obvious economic harm, yet most people would support having the government defend against such intrusions.
Why would the idea of dignity be so alien to privacy? And if dignity means the idea of protecting a person’s honor or worth, why would the business community be so threatened by it?
26 An Interview with David Vladeck, New York Times, August 5, 2009.
27 Thumbs Down to Notice-and-Choice at FTC, But Firm Rules Not Planned, 11(76) WARREN’S WASHINGTON INTERNET DAILY, April 11, 2010.
Professor Kenneth Rogoff’s The Curse of Cash convincingly argues that we pay a high price for our commitment to cash: Over a trillion dollars of it is circulating outside of US banks, enough for every American to be holding $4,200. Eighty percent of US currency is in hundred-dollar bills, yet few of us actually carry large bills around (except perhaps in the Bay Area, where the ATMs do dispense 100s…). So where is all this money? Rogoff’s careful evidence gathering points to the hands of criminals and tax evaders. Perhaps more importantly, the availability of cash makes it impossible for central banks to pursue negative interest rate policies, because we can just hoard our money as cash and enjoy an effective zero interest rate.
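The per-person figure is easy to sanity-check with back-of-the-envelope arithmetic. The currency and population numbers below are my rough mid-2010s assumptions, not Rogoff's exact data:

```python
# Back-of-the-envelope check of the per-capita cash figure.
# Both inputs are rough mid-2010s assumptions.
currency_in_circulation = 1.34e12   # ~$1.34 trillion in US currency
us_population = 320e6               # ~320 million people

per_capita = currency_in_circulation / us_population
print(f"${per_capita:,.0f} per person")  # lands in the ballpark of $4,200
```

Even with imprecise inputs, the order of magnitude is unmistakable: a few thousand dollars of cash per person, far more than anyone actually carries.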
What to do about this? Rogoff does not argue for a cashless economy, but rather a less-cash economy. Eliminate large bills, particularly the $100 (interesting fact: $1 million in 100s weighs just 22 pounds), and moving large amounts of value around illegally becomes much more difficult. Proxies for cash are not very good: they are illiquid, heavy, or easily detectable. And what about Bitcoin? It is not as anonymous as people think. Think Rogoff’s plan is impossible? Well, Indian Prime Minister Modi just implemented a version of it, eliminating the 500 and 1,000 rupee notes.
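The 22-pound aside checks out if you assume, as I do here, that a US banknote weighs about one gram:

```python
# How much does $1 million in $100 bills weigh?
# Assumption: a US banknote weighs about 1 gram.
GRAMS_PER_NOTE = 1.0
LBS_PER_KG = 2.20462

notes = 1_000_000 // 100                 # 10,000 hundred-dollar bills
kilograms = notes * GRAMS_PER_NOTE / 1000
pounds = kilograms * LBS_PER_KG
print(f"{pounds:.1f} lbs")               # about 22 lbs
```

Drop the largest denomination to the $20, and the same million weighs over 110 pounds, which is the point of the proposal.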
As you might imagine, Rogoff’s proposal angers many privacy advocates and libertarians. His well-written, well-informed, and well-argued book deserves more than its two stars on Amazon.
My critique is a bit different from the discontents’ on Amazon. I think Rogoff’s proposal offers a good opportunity to think through what consumer protection in payment systems might look like in a less-cash world, a world I think we are entering. Yet Rogoff’s discussion shows a real lack of engagement with the payments literature, and especially the privacy literature. For Rogoff’s proposal to be taken seriously, we need to revamp payments to address the problems of fees, cybersecurity, consumer protection, and other pathologies that electronic payments exacerbate.
The Problem of Fees
One immediately apparent problem is that, as much as cash contributes to crime and tax evasion, electronic payments contribute to waste as well, in different ways. The least obvious is the cartel-like fees imposed by electronic payment providers. All consumers, including cash users, subsidize the cost of electronic payments, and the price tag is massive. In the case of credit cards, fees can be as high as 3.5% of the transaction. I know from practice that startups’ business models are sometimes shaped around the problem of such fees. Fees may even be responsible for the absence of a viable micropayment system for online content.
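To see how fees could smother micropayments, consider the arithmetic under a fixed-plus-percentage fee schedule. The $0.30 + 2.9% rate below is a common processor rate I am assuming for illustration, not a figure from Rogoff:

```python
# Why fixed-plus-percentage card fees swamp micropayments.
# Assumed fee schedule: $0.30 fixed + 2.9% of the transaction
# (a common processor rate, used here purely for illustration).
def fee(amount: float) -> float:
    return 0.30 + 0.029 * amount

for amount in (0.25, 2.00, 100.00):
    share = fee(amount) / amount
    print(f"${amount:>6.2f} purchase -> fee ${fee(amount):.2f} ({share:.0%} of price)")
```

On a 25-cent purchase, the fee exceeds the price itself; on a $100 purchase it fades to about 3%. A payment rail built on this schedule simply cannot carry penny-priced content.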
Fees represent a hidden tax that a less-cash society will pay more of, unless users are transitioned to payment alternatives that draw directly from their bank accounts. Rogoff seems to implicitly assume that consumers will choose that alternative, but it is not clear to me that consumers perceive the fee difference between standard credit card accounts and debit or ACH-linked systems. For many consumers, especially more affluent ones, the obvious choice is a credit card: pay the balance monthly and enjoy the perks. Rogoff’s policy then means more free perks for the rich, subsidized by poorer consumers.
Taking Cybercrime Seriously
Here’s a more obvious crime problem: while Rogoff is quick to observe that cash means cashiers will skim, he pays less attention to the kinds of fraud that electronic payments enable. Electronic payment creates new avenues of attack for actors who are nowhere near their victims. A cashier will skim a few dollars a night, but can be fired. Cybercriminals will bust out for much larger sums from safe havens elsewhere in the world.
The Problem of Impulsive Spending and Improvidence
Consumers also spend more when they use electronic payments. And so a less-cash society means that you’ll have…less money! Cash is already an abstract representation of value, but digital money is an abstraction that is also immaterial. One doesn’t feel the “sting” of parting with electronic cash. In fact, there is even a company making a device to simulate parting with cash in order to deter frivolous spending.
The Problem of Cyberattack
Rogoff imagines threats to electronic payment as power outages and the like. That’s just the beginning. There are cybercriminals who are economically motivated, but then there are those who just want to create instability or make a political statement. We should expect attacks on payments to affect the confidentiality, integrity, and availability of services, and these attacks will come from economically motivated actors, from nation-states, and from terrorists simply wanting to put a thumb in the eye of commerce. The worst attacks will not be power-outage-like events, but rather attacks on integrity that undermine trust in the payment system.
Moving From Regulation Z to E
The consumer protection landscape tilts in the move from credit cards to debit and ACH. Credit cards are wonderful because the defaults protect consumers from fraud almost absolutely. ACH and debit payments place far more risk of loss onto the consumer: theoretically, more risk than even cash presents. For instance, if a business swindles a cash-paying customer, that customer only loses the cash actually transferred. In a debit transaction, the risk of loss is theoretically unlimited unless the consumer notices and reports the charge within 60 days. Many scammers operate today and make millions by effectuating small, unnoticed charges against consumers’ electronic accounts.
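The tiered liability rules behind that 60-day window come from the Electronic Fund Transfer Act and Regulation E, and can be sketched roughly as follows. This is a deliberate simplification: the actual rule turns on business days, loss of the access device, and when the periodic statement was transmitted.

```python
# Rough sketch of consumer liability for an unauthorized debit/EFT
# under the EFTA / Regulation E. Simplified for illustration; the
# real rule turns on business days and statement transmittal dates.
def max_liability(days_to_report: int, loss: float) -> float:
    if days_to_report <= 2:
        return min(loss, 50.0)     # prompt report: capped at $50
    elif days_to_report <= 60:
        return min(loss, 500.0)    # late report: capped at $500
    else:
        return loss                # after 60 days: potentially unlimited

# Contrast with credit cards (Regulation Z): liability for
# unauthorized use is capped at $50, and issuers typically waive it.
```

The asymmetry is the point: a consumer who misses a small fraudulent debit on a statement for two months can, in theory, bear the entire loss, which is exactly the window the small-unnoticed-charge scammers exploit.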
The Illiberal State; Strong Arm Robbery
Much of Rogoff’s argument depends on other assumptions, ones that we might not accept so willingly anymore. We currently live in a society committed to small-l liberal values. We have generally honest government officials. What if that were to change? In societies plagued with corruption and the need to bribe officials, mobile payments become a way to extract more money from the individual than she would ordinarily carry. Such systems make it impossible to hide how much money one has from officials or in a strong-arm robbery.
Paying Fast and Slow
Time matters, and Rogoff is wrong about the relative speed of payment in a cash versus electronic transaction. Rogoff cites a 2008 study showing that debit and cash transactions take the same amount of time. This is a central issue for retailers, and large ones such as Wal-Mart know to the second what is holding up a line, because those seconds literally add up to millions of dollars in lost sales. Retailers mindful of time kept credit card transactions quick, but with the advent of chip transactions, cash is clearly the quickest method of payment. It is quite aggravating to wait for so many people charging small purchases nowadays.
Mobile might change these dynamics, but not anytime soon. Bluetooth basically does not work. To use mobile payments safely, one should keep one’s phone locked. So when you add up the time of 1) unlocking the phone, 2) finding the payment app, 3) futzing with it, and 4) waiting for the network to approve the transaction, cash is going to be quicker. These transaction costs could be lowered, but the winner is going to be the platform-provided approaches (Apple or Android), not competitive apps.
Privacy is a final area where Rogoff does not engage with the literature or the issues involved. This is too bad, because electronic payments need not eliminate privacy. In fact, our current credit card system segments information in a way that gives consumers some privacy: Merchants have problems identifying consumers because names are not unique and because some credit card networks prohibit retailers from using cardholder data for marketing. The credit card network is a kind of ISP and knows almost nothing about the transaction details. And the issuing and acquiring banks know how much was spent and where, but not the SKU-level data of purchases.
The problem is that almost all new electronic payments systems are designed to collect as much data as possible and to spread it around to everyone involved. This fact is hidden from the consumer, who might already falsely assume that there’s no privacy in credit transactions.
The privacy differential has real consequences that Rogoff never contemplates or addresses. It ranges from customer profiling to the problem that you can never just buy a pack of gum without telling the retailer who you are. You indeed may have “nothing to hide” about your gum, but consider this: once the retailer identifies you, you have an “established business relationship” with that retailer. The retailer then has the legal and technical ability to send you spam, telemarketing calls, and even junk fax messages! This is why Jan Whittington and I characterized personal information transfers as “continuous” transactions: exchanges where payment doesn’t sever the link between the parties. Such continuous transactions have many more costs than the consumer can perceive.
Professor Rogoff’s book describes in detail how cash enables more crime, forces the honest to pay more taxes, and hobbles our government from implementing more aggressive monetary policy. But the proposed remedy suffers from a series of pathologies that will increase costs to consumers in other ways, perhaps dramatically. So yes, there is a curse of cash, but there are dangerous and wasteful curses associated with electronic payment too, particularly credit.
The critiques I write here are well established in the legal literature. Merely using the Google would have turned up the various problems explained here. And this makes me want to raise another point that is more general about academic economists. I have written elsewhere that economists’ disciplinarity is a serious problem, leading to scholarship out of touch with the realities of the very businesses that economists claim to study. I find surprisingly naive works by economists in privacy who seem immune to the idea that smart people exist outside the discipline and may have contemplated the same thoughts (often decades earlier). Making matters worse, the group agreement to observe disciplinary borders creates a kind of Dunning–Kruger effect, because peer review also misses relevant literature outside the discipline. Until academic economists look beyond the borders of their discipline, their work will always be a bit irrelevant, a bit out of step. And the industry will not correct these misperceptions because works such as these benefit banks’ policy goals.
What can fit in 2170×60?
🙂 Thank you, jonathanhher, for your Nyan Cat logo.
Behold the newest self-regulatory group, the “Coalition for Better Ads,” which claims that it will “improve consumers’ experience with online advertising. The Coalition for Better Ads will leverage consumer insights and cross-industry expertise to develop and implement new global standards for online advertising that address consumer expectations.” How? They will:
Create consumer-based, data-driven standards that companies in the online advertising industry can use to improve the consumer ad experience
In conjunction with the IAB Tech Lab, develop and deploy technology to implement these standards
Encourage awareness of the standards among consumers and businesses in order to ensure wide uptake and elicit feedback
The Coalition will draw upon consumer research in shaping the standards.
What are “better” ads? Certainly more secure ads would be welcome, in the sense that modern web advertising is not a billboard but rather code that can introduce insecurity. But what about privacy? Wouldn’t it make sense for ads to be more respectful of users’ privacy? How about advertisers’ use of data brokers to merge data online and off, something that the NAI promised would not happen back in 2000?
These are dangerous questions to ask. So dangerous that the fearless leaders of Facebook wouldn’t even ask them. Recall last month when Facebook announced it would circumvent ad blockers? Facebook’s Andrew Bosworth wrote:
For the past few years at Facebook we’ve worked to better understand people’s concerns with online ads. What we’ve heard is that people don’t like to see ads that are irrelevant to them or that disrupt or break their experience. People also want to have control over the kinds of ads they see.
Well, I think those conclusions are correct. Obviously no one wants disruptive ads; the emergence of the popup blocker is testimony to that. And if you are going to have advertising, you might as well have relevant ads. The elephant in the room is privacy: how could a company that tracks people on about 40% of the public web, intermediates their conversations, and tracks them physically not surface privacy concerns? The answer is that Facebook didn’t ask about privacy.
Turning to the Coalition for Better Ads, it did not mention privacy anywhere in its discussion of ads. Nor did PageFair in its 2015 study of ad blocking, nor did the IAB’s primer on ad blocking. The closest that any ad group will get to the question appears to be Secret Media, which in a 2016 report wrote, “It is our hypothesis that advertising technologies are negatively impacting publisher websites and causing users to be frustrated by slow page load, tracking that exploits personal data, and the over exposure to ads.”
“Chris Hoofnagle has written the definitive book about the FTC’s involvement in privacy and security. This is a deep, thorough, erudite, clear, and insightful work – one of the very best books on privacy and security.”
“A landmark work for anyone interested in privacy or consumer protection law.”
“This well-written, comprehensive history of the Federal Trade Commission shows once again the primary importance the agency has played in shaping the regulatory environment of the United States. It is essential reading for anyone who deals regularly with the FTC, and is a good primer for those coming in contact with the agency for the first time. Clear, thoughtful and engaging.”
“A timely and insightful analysis of the FTC as a key actor in protecting information privacy. The historical context provides a solid basis for Hoofnagle’s well-supported policy recommendations.”
“A welcome perspective on challenges facing a great agency designed to ‘rein in’ the American market.”
“Hoofnagle masterfully distills and concentrates the major steps in the development of the FTC’s consumer protection authority…This is a serious work of historical scholarship.”
“This book offers a fascinating, informed exploration into the dangers of the Internet and the problems and potentials of the FTC in effectively dealing with them. It is well worth our attention.”
“Chris Hoofnagle has done an enormous public service by writing a comprehensive and critical guide to the Federal Trade Commission’s consumer protection efforts, which started over a century ago in reaction to a changing economy and industrialization […] we could not ask for a better primer than this incisive and informative book.”
“Chris Hoofnagle has put together an impressive, authoritative and useful treatise on the law of consumer privacy in the U.S. and the role of the Federal Trade Commission. This book is an excellent read for all those interested in consumer privacy, and should prove to be a valuable resource for years to come.”
“This book succeeds as a work of history, a deep analysis of law and institutions, and advocacy for a better regime for the key issue of our times.”
…Through his analysis of the role played by the courts, Congress, and the Commission itself, he illustrates the doctrines and dynamics that have contributed to shaping this agency. This makes the book a valuable tool for European privacy experts who wish to better understand the US regulatory approach to privacy protection and understand how political and social forces have affected the powers given to the Commission.
…Overall, Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy is a fascinating read and a treasure trove of useful references for further research.
Federal Trade Commission Privacy Law and Policy (FTCPL&P) is my 2016 book on the FTC. It is really two books. The first part details the agency’s consumer protection history from its founding, and in so doing, it sets the context for the FTC’s powers and how it is apt to apply them. The book has an institutional analysis discussing the internal dynamics that shape agency behavior. It details how the FTC policed advertising with treatments of substantiation, the Chicago School debates, the problem of advertising to children, and the Reagan revolution. The second part of the book explains the FTC’s approach to privacy in different contexts (online privacy, security, financial, children’s, marketing, and international). One thesis of the book is that the FTC has adapted its decades of advertising law cases to the problem of privacy. There are advantages and disadvantages to the advertising law approach, but do understand that if you are a privacy lawyer, you are really an advertising law lawyer 🙂
FTCPL&P has been reviewed in the Journal of Economic Literature, the ABA Antitrust Source, the European Data Protection Law Review, World Competition, and the International Journal of Constitutional Law.
…the work of Hoofnagle stands out by offering both a welcome description of the applicable law and a broad contextual framework…Chris J. Hoofnagle takes over fifteen years of experience in American consumer protection, information, and privacy law and converts them into an absorbing, in-depth institutional analysis of the agency.
The full cite is: Bilyana Petkova, Book Review: Federal Trade Commission Privacy Law and Policy, 14(3) Int J Constitutional Law 781–783 (2016) doi:10.1093/icon/mow053
In LifeLock, the FTC alleged that the company “failed to establish and maintain a comprehensive information security program…” as required by a 2010 order. LifeLock settled the case for over $100M, despite the fact that the company claimed it had a clean bill of health from a reputable third-party PCI assessor, and, according to Commissioner Ohlhausen, LifeLock suffered no breach. Much of LifeLock was sealed, and so the case is a bit of a puzzle: how could a company that receives a clean PCI-DSS assessment also fail to establish a security program?
I hear we’re going to learn more specific details on the case soon, but in the meantime, the FTC just released to me LifeLock’s initial (2010) assessment. It contains a comical “public version,” which is completely redacted, and a largely unredacted “non-public” version.
More to come soon, but bear in mind that the FTC gave Wyndham a kind of safe harbor if the company obtains a clean PCI assessment. If other respondents ask for similar treatment, these assessments are going to become more important than ever.
Few have shed as much light on data science as Cathy O’Neil. The former Barnard math professor, author of Doing Data Science, and hedge fund quant has now published Weapons of Math Destruction (Crown 2016).
Weapons of Math Destruction (WMDs) are perversions of data science that increasingly influence our lives. O’Neil shows how sloppy mathematical processes, designed for efficiency and lacking any consideration of fairness, are being used to sort people. Why is this a problem? WMDs are focused on the poor, while the rich get to rely on old-school methods of reputation and decisionmaking: the letter of recommendation, the personal interview, and so on. Why are WMDs worse than ordinary human decisionmaking, with all of its foibles? O’Neil argues that WMDs lack feedback loops and that WMD users are much more concerned about doing things well enough rather than correctly. To demonstrate these points, O’Neil walks the reader through anecdotes including the scoring of teachers based on student exam performance, the pathologies that have arisen from U.S. News & World Report’s rankings of colleges, the online advertising that leads people to subprime loans and for-profit colleges, the use of algorithms to sentence criminals, the use of predictive policing to allocate cops on the beat, the use of information to set personalized insurance rates, and Facebook’s potential to influence our mood and votes.
Our livelihoods increasingly depend on our ability to make our case to machines
O’Neil points out time and again that people learn to game the algorithm. So why isn’t that enough to solve the problems that O’Neil elucidates? The gaming creates perverse incentives and gross outcomes. Teachers help their students cheat in order to perform well on test-score-based algorithms; the honest teachers who do not, get fired. Colleges “hire” highly cited professors on a part-time basis only to list them on their websites in order to improve the school’s ranking.
In other cases, individuals cannot game the system and they suffer for it. Poor neighborhoods with nuisance crimes get more and more police attention, and in turn, more arrests, which feeds into other systems that predict that the poor are more likely to be recidivists. People who do not comparison shop are identified and charged more because companies can. And finally, we face the risk that Facebook will use its platform to shape how we view the world, to encourage us to vote or not, and so on.
O’Neil discusses baseball data science extensively, showing how the reams of information from games can be used to make interesting predictions and alter gameplay strategy. According to O’Neil, baseball analytics are fair for several reasons: the system does not use proxies (indirect measures of player skill) for performance and instead relies on direct evidence such as the number of runs scored. The inputs of baseball analytics are transparent: anyone can see and record them. In addition, whatever machine learning is occurring is relatively transparent as well. Finally, the system can incorporate lessons from its predictions and adjust accordingly.
How would one improve on O’Neil’s assessment of WMDs? I would suggest several additional factors. First, taking baseball as an example, there are no pathological incentive conflicts in such analyses. Baseball teams want their players to perform well. O’Neil’s book details just how conflicting incentives are in other contexts, such as when your bank analyzes your value as a customer. Second, to the extent there are conflicts (after all, the baseball teams are in competition), they are in rough equilibrium. Two well-resourced teams are competing, and we can assume that each can anticipate and react to the predictive power of the other. On the other hand, I am at a total disadvantage with respect to my bank’s analytics. In theory, competition among banks protects me, but in reality, transaction costs in switching, the erosion of fiduciary duty to customers, and so on make our relationship with banks a form of competitive conflict.
Weapons of Math Destruction makes a good case that efficiency is not an unqualified good. O’Neil shows how, increasingly, WMDs are used to create efficiencies for companies that come at the cost of our dignity and the fairness of our society. She suggests several interventions that would deepen the responsibility the data scientist has for the data subject. But under my suggested framework above (incentive conflict and equilibrium), I think a needed solution is to bring back competition—radical competition. The challenge is to bring back this competition in a society that has bought into the platform, a society that can say with a straight face that ISPs are competitive, and that ignores the obvious transaction costs involved in putatively competitive markets. If we could get away from using a single company for search, email, social networking, online videos, ecommerce, advertising, a browser, an app market, and an operating system, there would be one less company that could so deeply evaluate us and control how we experience the world.
Useful words I learned from Book of Numbers
- Adverks sales: The industrial activity of advertising—onvertising, online advertising
- Recs, rectards, rectarded, recy: One of the most colorful and widely used descriptors in the book. Techs are sophisticated users, and then there’s recs, recreational ones. For instance, Principal describes the company’s new New York office as being filled with “Divisions requiring minimal intelligence. Minimal skill. Not techs but recs.” And Principal’s father as having subscribed to a “cruft of rectarded netservices whose chief goal was to keep their users within the walled garden by providing a sense of community, along with local news and weather, only so as like not to lose them to the wilds of the web…”
- Lusers: Loser users
- Plastiwicker: Those cheap plastic chairs formed to look like wicker
- Laptopped: Your probable current condition, dear reader
- Fannypackers: Wearers of fanny packs
- Acqhires: Workers “hired” through acquisition of their company
- Lotused: Something that Steve Jobs might do
- Comptrasting: To both compare and contrast
- Octalfortied: Forgotten
- Concentives: The name Cohen gave to a mystery shopper company. Seems perfect for a social media marketing company
- Crustaceate: A crabwalk. To index internet sites like a crab, compare with “spider” or “crawl”
- Glomars: Presumably a reference to the Glomar Explorer—a project so secret that one cannot disclose its existence or non-existence. We learn from the book that Tetration is spying on its users and perhaps framing them for crimes by suggesting content.
- Lynchrims: “…situations in which one human hangs lynched without clothes from a tree while another human stands just below and rims their anus.”
- Compocalypse: Computer related disaster
Bloomberg reports, FTC to Crack Down on Paid Celebrity Posts That Aren’t Clear Ads. Yes, the FTC is saber-rattling on this issue, with its native ads workshop, statements on the issue, and enforcement actions. And the media coverage runs into the same old arguments. First, “we didn’t intend to mislead.”
“We’re venturing into a little bit of ridiculous territory with the FTC saying these things because influencers really want to follow the rules,” Pomponi said. “They want to do a good job — they want to be seen as useful to brands and don’t want to do anything that would jeopardize their relationships.”
That’s great and all, but as an advertiser, you hold the duty to ensure that your messaging is not misleading. You are in control of it. You draft it. You have to anticipate how a reasonable consumer might interpret it. FTCA liability does not require an intent to deceive. The issue is whether endorsements are likely to mislead, even if the deception was an unintentional mistake.
There’s a basic tension here. The point of endorsements, like native advertising, is to create a friendly engagement with the product. However, that friendly engagement may disarm the consumer. When the consumer recognizes material as advertising, it causes the consumer to more skeptically evaluate (or avoid) an advertising claim. Thus, the benefits of secret endorsement are in tension with the goal of enabling consumers to be self-reliant in recognizing commercial persuasion.
Second, there’s something new and different about influencers and ads:
Some advertisers say influencer posts don’t deserve such careful disclosure, because they are not the same thing as a traditional ad. Lauren Diamond Kushner, a partner at Kettle, a creative agency in New York, has worked on influencer campaigns with brands including Sunglass Hut. She said the Instagram stars and YouTubers often only work with the brands that they genuinely like and use.
Wrong! So, before the internet, there was this thing called TV. And on TV, there were celebrities who did ads. Those celebrities too screened products and only did endorsements that were not too embarrassing. (In many cases, real celebrities limit ads so that they appear only outside the US!) And before TV, there was this thing called radio. And so on.
The “genuinely like and use” argument is baloney. What happens if the influencer changes her mind and stops using it? Do the tweets get deleted?