The Right to Erasure Is a Starting Point, Not a Solution

The right to erasure — the GDPR’s Article 17 and its equivalents in US state laws — is the privacy right that gets the most attention from individuals trying to manage their own digital presence. It’s easy to understand, it sounds powerful, and it produces a tangible result: you submit a request, you get a confirmation, your data is supposedly deleted.

The reality is more complicated. Erasure requests apply to the company you submit them to. They don’t apply to companies that have already received copies of your data, companies that acquired it before the request, or companies in jurisdictions where the right doesn’t apply. Data that has been shared downstream — sold, licensed, included in a data product — is not recalled by a deletion from the original source. The right to erasure is, in practice, a right to stop future data processing by a specific entity, with limited effect on existing distribution.

This isn’t an argument against using erasure requests. They’re worth submitting, especially to the major data aggregators that serve as upstream sources for others. But understanding what they do and don’t accomplish matters for setting realistic expectations.

The more durable protection is preventing data from being collected in the first place — through the kinds of practices that reduce the creation of linkable records rather than the deletion of records that already exist. This is harder, less legible, and doesn’t produce confirmation emails. It also works better over time. The deletion workflow is reactive; the minimization approach is structural. Both are worth doing, but the second one is doing more of the actual work.

The right to erasure is a useful tool in a larger toolkit. Treating it as the toolkit is where the disappointment comes from.

In Defense of Boring Security Habits

Most of the security advice aimed at individuals is either too basic to be useful or too technical to be actionable. The too-basic version — use strong passwords, don’t click suspicious links — has been repeated so many times that people have stopped hearing it. The too-technical version — run your own mail server, use Qubes OS, route all traffic through Tor — assumes a threat model most people don’t face and a time investment most people won’t make.

The useful version is somewhere in the middle, and it’s also the boring version. A password manager used consistently. Two-factor authentication on accounts that matter, using an authenticator app rather than SMS. A separate email address, distinct from your primary inbox, for account registrations. Keeping software updated. A full-disk encryption passphrase on your laptop that you’d actually use under pressure. These things are not exciting and they don’t protect against nation-state adversaries, but they protect against the adversaries that most people actually face: credential stuffing from data breaches, phishing, physical device theft, and the casual account takeover that happens when someone uses the same password everywhere.
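The authenticator-app point is worth a short aside, because the mechanism is simpler than its reputation. A TOTP code is an HMAC of the current 30-second time step, computed locally from a secret your phone and the provider exchanged once at setup. Here is a minimal sketch of the RFC 6238 computation in TypeScript, using Node’s built-in crypto module; the secret below is the RFC’s published test value, not anything a real provider would issue:

```typescript
import { createHmac } from "node:crypto";

// RFC 6238 TOTP: the six-digit code is an HMAC-SHA1 over the current
// 30-second time step, truncated per RFC 4226's "dynamic truncation".
function totp(secret: Buffer, nowMs: number = Date.now()): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(nowMs / 1000 / 30)));
  const hmac = createHmac("sha1", secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // low nibble of last byte
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// RFC 6238's test secret, used here purely for illustration; real
// secrets arrive base32-encoded in the provider's QR code.
console.log(totp(Buffer.from("12345678901234567890")));
```

Both sides derive the same six digits from the shared secret and the clock. Nothing crosses the phone network, which is why an authenticator app survives the SIM-swap attack that hands SMS codes to an attacker.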

What makes these habits hard to maintain isn’t the technical complexity — a password manager is not technically demanding — but the friction at adoption time. The first week of using a password manager is annoying. After that it becomes invisible. The same is true of most of the useful boring habits. They have an upfront cost and then they run in the background.

I’ve written before about more technical approaches to privacy and security, and I think those are worth understanding. But the baseline boring habits are more valuable in aggregate than any number of more sophisticated measures that don’t get consistently applied. Consistency is the mechanism. The specific tools matter less than the decision to actually use them every time.

Browser Fingerprinting Has Made the Cookie Wars Irrelevant

The years of argument about third-party cookies — Google’s repeatedly delayed deprecation timeline, the privacy advocates’ pressure, the ad industry’s resistance — have always had a slightly unreal quality to me, because the underlying tracking capability that cookies enabled was already being replicated by other means before the cookie conversation started.

Browser fingerprinting is the main one. Your browser, in the course of normal operation, exposes enough information to identify you with high reliability: the combination of your browser version, operating system, screen resolution, installed fonts, timezone, language settings, hardware capabilities reported through WebGL and Canvas, and dozens of other signals produces a fingerprint that is, in practice, unique for most users. Unlike a cookie, it requires no storage on your device and leaves no trace you can delete. You can’t opt out of it in any meaningful way using standard browser controls.
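To make that concrete, here is a sketch of what passive collection can look like, in browser-side TypeScript. The signal reads are real web APIs; the hash at the end stands in for the server-side matching a real tracking script would do, and the particular signal set is an illustrative subset, not any vendor’s actual script:

```typescript
// A handful of the signals fingerprinting scripts combine. None of
// these reads requires a permission prompt or stores anything locally.
async function fingerprint(): Promise<string> {
  const canvasProbe = (): string => {
    // GPU, driver, and font-rendering differences make the rendered
    // pixels, and therefore this data URL, vary across machines.
    const c = document.createElement("canvas");
    const ctx = c.getContext("2d");
    if (!ctx) return "no-canvas";
    ctx.textBaseline = "top";
    ctx.font = "14px Arial";
    ctx.fillText("fingerprint probe", 2, 2);
    return c.toDataURL();
  };
  const signals = [
    navigator.userAgent,
    navigator.language,
    navigator.hardwareConcurrency,
    screen.width, screen.height, screen.colorDepth,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    canvasProbe(),
  ].join("|");
  // Hash into a compact identifier; a tracker would send this home
  // and join it against the value recorded on the last visit.
  const digest = await crypto.subtle.digest(
    "SHA-256", new TextEncoder().encode(signals));
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0")).join("");
}
```

Every read above looks like ordinary page behavior, which is why no cookie setting or storage-clearing ritual touches it.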

The techniques are well documented and have been for years. The Electronic Frontier Foundation’s Cover Your Tracks tool will show you your fingerprint and how unique it is. For most users on most browsers, the answer is: effectively unique.
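The arithmetic behind that answer is worth spelling out. Roughly independent signals multiply the number of distinguishable browsers, so their bits of identifying information add. The per-signal bit counts below are illustrative assumptions, not measured values:

```typescript
// Illustrative entropy budget; real per-signal values vary with the
// population being measured and the browser in question.
const bits = { userAgent: 10, timezone: 3, screen: 5, fonts: 7, canvas: 8 };
const total = Object.values(bits).reduce((a, b) => a + b, 0);
console.log(`${total} bits distinguishes 1 in ${(2 ** total).toLocaleString()}`);
// Prints: "33 bits distinguishes 1 in 8,589,934,592", which is more
// browsers than there are people.
```

A few modest signals compound quickly, which is why blocking any single one of them rarely changes the outcome.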

The practical implication is that the privacy gains from blocking third-party cookies are real but limited. The tracking infrastructure that uses fingerprinting — and that uses bounce tracking through redirect chains to set first-party cookies, and that uses device-graph matching across logged-in services — doesn’t need third-party cookies to function. Deprecating them raises the cost of tracking slightly and changes which companies can do it effectively, but it doesn’t change the underlying dynamic.

This isn’t an argument for fatalism. Tor Browser and Firefox with the right configuration do meaningfully reduce fingerprinting surface. The point is that single-mechanism solutions to a multi-mechanism problem tend to shift the problem rather than solve it.

What Massachusetts Got Right on Consumer Privacy

Massachusetts has a better consumer privacy framework than most people who live here realize. The state’s data privacy law — not the most recent legislation, but the existing framework built up over the past fifteen years — includes some provisions that are genuinely stronger than the California law that tends to get all the coverage.

The most important is the data security requirement. Massachusetts 201 CMR 17.00, the Standards for the Protection of Personal Information of Residents of the Commonwealth, requires any company that holds personal information about Massachusetts residents to maintain a written information security program — a WISP — with specific required elements. The standard is not just a notification requirement; it’s a security practice requirement, and it applies to any business that holds this information regardless of where the business is located.

This is more meaningful than a right-to-know or right-to-delete provision, which are the features that privacy advocates usually lead with, because it governs the baseline handling of data rather than individual remediation after something has already gone wrong. A right to delete your data from a company that has already had a breach is less valuable than a requirement that the company not have the breach in the first place.

The gaps: enforcement is inconsistent, the AG’s office is resource-constrained, and the private right of action is limited. The law doesn’t cover the full range of data practices that modern consumers are actually exposed to. It was written for a different era of data collection.

But the underlying logic — that security is a prior obligation, not an afterthought — is the right logic, and it’s worth noting that Massachusetts arrived at it earlier and more clearly than most other states. The newer comprehensive privacy bills moving through the legislature are more ambitious; whether they’ll be better in practice will depend on enforcement, which is where privacy law in the US consistently falls short.

The Data Broker Problem Isn’t Going Away

Data brokers have been a known problem for long enough that the conversation about them has calcified into a predictable shape: journalists write the exposé, the company in question says something about industry practices and consumer choice, regulators express concern, nothing material changes. The cycle completes every eighteen months or so and we start again.

The reason nothing changes is not ignorance. The major data brokers — Acxiom, LexisNexis, Experian’s data business, the dozen smaller ones — are well understood by the people in a position to regulate them. The reason is structural. Data brokers serve industries that have significant lobbying power: insurance, financial services, background screening, direct marketing. The value proposition is clear and the customers are sophisticated. Regulatory pressure that would meaningfully constrain the data broker ecosystem would also constrain their customers, and those customers have resources to push back.

The opt-out regime that exists in most US states is not a solution. It’s a release valve. The process is deliberately difficult, the opt-outs don’t persist across company acquisitions, and the companies are not required to verify that the opt-out applies to all their downstream data products. A person who diligently opts out of every major data broker they can identify has reduced their exposure in ways that are meaningful at the margin but hasn’t actually removed themselves from the data economy.

What would actually change things is a data minimization requirement — a rule that says you can only collect and retain personal data that is necessary for a specific stated purpose, with enforcement that has real teeth. The EU’s GDPR has this in principle; the enforcement has been inconsistent, but the legal framework exists. The US doesn’t have an equivalent at the federal level, and the state-by-state patchwork isn’t a substitute.

In the meantime: the opt-out services are worth using, the major brokers are worth opting out of directly, and maintaining a clear-eyed view of what that does and doesn’t accomplish is the most honest starting point.