By: Ryan Jaeger
Historically, societal standards have mandated asking permission before intruding on another's privacy. Strangers and friends alike knock before entering the sanctuary of our homes. We can choose whether to answer our phones, listen to voicemail messages, or return calls. We can unlock driveway gates only for those we know and trust, or who arrive by invitation.
These protocols have long been accepted as common courtesy. Yet there is a marked disconnect between those societal norms and what many technology and e-commerce companies and applications currently practice in our online spaces: exploiting data about our personal habits and actions. It is a disconnect that most consumers appear to accept through the tacit agreements required to use these "free" applications and services. But are consumers aware of how their data is being used?
Enterprises argue that they provide adequate data use notifications, and that their customers have “opt-out options.” But these arguments leave several unanswered questions:
And finally, why should this customer data continue to be more vulnerable than consumer credit or health record data?
Leaving these questions unanswered, or adroitly avoiding or redirecting them, while failing to demand enterprise accountability for how customer data is protected and used, is antithetical to providing an optimal customer experience.
Perhaps consumers believe that security and privacy protocols are commonplace among the technology companies and mobile app developers who "serve" them. Perhaps their seemingly resigned acquiescence to the collection of their individual data stems from an assumption that most, if not all, companies act like those that check credit reports: the data cannot legally be used for anything other than what the consumer has agreed to.
Whether consumers actually believe they have legal data protections or are simply operating behind a veil of complacency, the reality is quite different. Unlike federal laws protecting credit data, or Europe's General Data Protection Regulation (GDPR), U.S. consumers have only a patchwork of federal laws applying bandages to a festering privacy wound. A few states, namely California and New York (for financial services companies), have taken data protection regulation into their own hands, but so far the other 48 have not found enough legislative consensus to follow suit.
To be sure, consumer advocacy was not the primary driver in our country's earliest days of "credit reporting." Rather, companies were trying to make better lending decisions by attempting to filter personal rumors and misinformation out of assessments of consumers' financial integrity, a process that did not entirely exclude highly personal consumer matters.
Privacy concerns grew sharply as credit reporting records became computerized, so much so that, in what would prove to foreshadow today's debates, Congress held hearings to examine those concerns.
The difference, and perhaps a measure of big tech's influence over the last half century, is that those hearings led to a change in federal law: the enactment of the Fair Credit Reporting Act (FCRA) in 1970, legislation that would evolve alongside growing concerns about credit data.
With the FCRA, Congress legislatively addressed a significant problem in how consumer credit data was used. Why, then, is personal online customer data so subordinate to credit data when it comes to protections?