Tuesday, October 24, 2023

The CCPA Should Scare You

Introduction:

On November 3, 2020, Californians voted in favor of Proposition 24, which introduced amendments to the California Consumer Privacy Act (CCPA). These amendments are collectively known as the California Privacy Rights Act (CPRA) and went into effect on January 1, 2023. The CPRA expands and modifies the CCPA while establishing the California Privacy Protection Agency (CPPA) as the new administrator of the law. This post aims to provide an overview of the CCPA 2023 as amended by the CPRA.

A. Summary:

The CPRA amendment, which builds upon the existing CCPA framework, brings several significant changes. It changes the definition of a "business" by raising the consumer-or-household threshold to 100,000 and by counting "sharing" in addition to "selling" of consumer information toward that threshold. Significantly, the CCPA (as amended by the CPRA) no longer makes a distinction between "sharing" and "selling" consumer information: they are treated the same, and sharing personal information without permission may result in liability under the CCPA regardless of where you are located. You could even be located in North Carolina.


If you do business with a California consumer, you could fall under the purview of the CCPA.

Watch out for the “private right of action.”

The private right of action under the CCPA is expanded to include breaches of email addresses in combination with passwords or security question answers that would allow access to a consumer's account. In other words, if you are a California resident, your email address cannot be taken, sold, or shared with a third party without your permission. Email addresses are deemed personal information worthy of protection, which makes sense given their value to businesses hoping to use them to make a sale or to sell the information itself.

The CPRA introduces new provisions, such as the inclusion of "sensitive personal information," an extended look-back period for consumer information requests, expanded consumer rights, and non-discrimination protections.

B. Pleading Questions:

For lawyers, the question arises: how do you plead a CCPA claim under the CPRA amendment? When filing a complaint under the CCPA 2023, it is still appropriate to plead the CCPA itself, as the CPRA amendment is integrated into the CCPA framework. While it may be prudent to include a footnote acknowledging the CPRA amendment, it is not necessary. However, it is essential to update the definition of a "business" in complaints to reflect the new threshold of 100,000 consumers or households, and to recognize that both "selling" and "sharing" of consumer information count towards that threshold.

C. Main Points for Private Right of Action:

The CCPA is likely to be the model for comprehensive cybersecurity measures under federal law if Congress ever takes this kind of action. Below is a list of things to keep in mind when pleading a claim under the CCPA as amended by the CPRA.

  1.  Pleading the CCPA: Complaints should continue to plead the CCPA, but the definition of a "business" must be updated to include the new threshold of 100,000 consumers or households, and "sharing" should be considered alongside "selling" for reaching that threshold.

  2. Expanded Private Right of Action: The CPRA expands the private right of action to include breaches of email addresses in combination with passwords or security question answers, eliminating the need to tie such breaches to a specific category of "personal information" to trigger the notice provision and protections of the CCPA.

  3. Introduction of "Sensitive Personal Information": The CPRA adds "sensitive personal information" to the existing category of "personal information." This includes biometric information for unique identification, personal health information, and personal information related to a consumer's sex life or sexual orientation.

  4. Extended Look-Back Period: The CPRA allows consumers to request information beyond the usual 12-month period, if it is not impossible or disproportionately burdensome for businesses to provide. This expands the scope of consumer information requests.

  5. Expanded Consumer Rights: The CPRA grants consumers the right to correct inaccurate information, opt out of sharing their personal information, and limit the use of their information. It also introduces non-discrimination protections to safeguard employees, job applicants, and independent contractors who exercise their rights under the CCPA.


D. Main Points "Fleshed Out":

Below are a few more things to keep in mind: 

  1. Use of the CCPA after January 1, 2023: The law is still cited as the CCPA even after the CPRA amendment took effect.

  2. No Separate Law: The CPRA modified and expanded the CCPA without creating a separate law. Therefore, when referring to the law, it remains the CCPA.

  3. Impact on the Definition of a "Business":

The CPRA amendment to the CCPA introduces several changes that impact how businesses are defined and regulated under the privacy law. These changes broaden the scope and obligations for businesses in handling consumer data. Let's delve into the key modifications brought about by the CPRA.

a. Redefinition of "Business":

Under the original CCPA, a business was defined as an entity that met one or more of the following criteria: (A) had an annual gross revenue over $25 million, (B) possessed personal information of 50,000 or more consumers, households, or devices, or (C) derived 50% or more of its annual revenue from selling consumers' personal information.

The CPRA amends the definition of a "business" by increasing the consumer-or-household threshold from 50,000 to 100,000. This means that, under that prong, a business must buy, sell, or share the personal information of 100,000 or more consumers or households to be subject to the amended CCPA.

Moreover, the CPRA expands the definition of "business" to explicitly include the sharing of consumer information, treating sharing in the same way as selling. This change expands the reach of the CCPA, ensuring that businesses engaged in sharing consumer data on a large scale are also subject to the law's provisions.
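To make the three-prong "business" test concrete, here is a minimal Python sketch. This is illustrative only, not legal advice; the class and field names are my own invention, not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class Company:
    annual_gross_revenue: float        # dollars per year
    consumers_bought_sold_shared: int  # CA consumers/households per year
    pct_revenue_from_selling_sharing: float  # percent, 0-100

def is_ccpa_business(c: Company) -> bool:
    # A company qualifies as a "business" if it meets ANY one of the prongs
    # described above, using the post-CPRA thresholds.
    return (
        c.annual_gross_revenue > 25_000_000
        or c.consumers_bought_sold_shared >= 100_000
        or c.pct_revenue_from_selling_sharing >= 50
    )
```

Note that the prongs are disjunctive: a company with modest revenue still qualifies if it buys, sells, or shares the personal information of 100,000 or more consumers or households.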

b. Strengthened Private Right of Action:

The CPRA amendment enhances the private right of action provision under the CCPA. Previously, to trigger the notice provision and protections of the CCPA, a breach had to involve an enumerated category of "personal information." Under the CPRA, however, a breach of an email address in combination with a password or security question and answer that would permit access to the consumer's account is itself sufficient to trigger the CCPA's provisions. This expansion removes the need to tie such a breach to another enumerated category of personal information, thereby broadening the circumstances in which consumers can enforce their rights under the law.
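The expanded trigger can be sketched in Python as a hedged illustration. The category labels below are simplified assumptions of my own, not statutory text, and this is not legal advice.

```python
# Simplified stand-in for the statute's enumerated personal-information categories.
LEGACY_CATEGORIES = {"ssn", "drivers_license", "financial_account", "medical_info"}

def breach_triggers_ccpa(breached: set) -> bool:
    # Post-CPRA: an email address plus a password or security Q&A that would
    # permit account access is itself enough to trigger the CCPA's provisions.
    email_plus_credentials = "email" in breached and (
        "password" in breached or "security_qa" in breached
    )
    # Pre-existing route: breach of an enumerated personal-information category.
    legacy_personal_info = bool(breached & LEGACY_CATEGORIES)
    return email_plus_credentials or legacy_personal_info
```

For example, a breach exposing only email addresses would not trigger the provision by itself, but email addresses plus passwords would.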

c. Addition of "Sensitive Personal Information":

The CPRA introduces a new category called "sensitive personal information" under the CCPA. This includes all the elements of the previous definition of personal information but adds specific types of data, such as the processing of biometric information for unique identification, personal information related to a consumer's health, and personal information related to a consumer's sex life or sexual orientation. The inclusion of sensitive personal information highlights the heightened privacy concerns associated with these types of data and imposes additional obligations on businesses in handling and protecting them.

d. Extended Look-Back Period:

The CPRA expands the "look-back period" for consumer data requests beyond the previous 12-month period under the CCPA. Consumers now have the right to request information that goes beyond the 12-month period, if providing such information is not impossible or disproportionately burdensome for the business. This change enables consumers to access a broader range of historical data held by businesses, enhancing transparency and empowering individuals to exercise their privacy rights.

Conclusion:

The CPRA's amendments to the CCPA significantly impact how businesses are defined and regulated under the privacy law. With changes to the threshold for businesses, the strengthened private right of action, the introduction of sensitive personal information, and the extended look-back period, the CPRA broadens the scope and obligations for businesses in protecting consumer privacy. These changes reflect the evolving landscape of privacy rights and emphasize the need for businesses to adapt their practices to comply with the new requirements imposed by the CPRA.

Feel free to contact me at michaelwuva78@gmail.com. You can leave a comment, too.

I also publish a legal technology blog, NC Legal Technology, a New Frontier.

Your Personal Information Will be Stolen But...When, by Whom, and How Many Times?

The most dangerous Luddites are those who don’t care about your privacy. Many of them wear black robes, and, despite their childlike understanding of technology, they make decisions that negatively impact all of us.

But…it gets worse, my friends.

Hackers have already stolen your most intimate information. Some of us know this, and some of us do not. Skeptics argue people willingly give away their data, but this ignores the meaning of “consent.” We all know Americans have trouble understanding the concept of consent—especially men.

People are forced to supply their names, Social Security numbers, dates of birth, street addresses, and other personal information to obtain a good credit score, which allows them to purchase a house or a car, get a job, rent an apartment, purchase insurance, and live in America. There are countless other instances where people have little choice but to share their data. Modern life forces people to share their individual data, which leaves everyone in America vulnerable—even children.

Data breaches at Target, Yahoo!, Home Depot, HSBC, and many others, far too numerous to list, demonstrate that American law is ill-equipped to combat these incidents and the growing threat to our personal data and to our privacy in general.[1] Despite our supposed innovation in software, the security of the cloud, and other advancements, data breaches keep occurring, and there is no sign they will stop or even abate.

When it comes to data privacy, the law is either ill-equipped to handle, or indifferent to, the present and ever-expanding problem of securing individual personal data. It should not be so difficult for organizations to protect personal information. Although laws in the United States need to be more comprehensive and cohesive, that is not the topic of this post. The United States does have some very good legislation, but the problem is that courts often will not hold organizations accountable, as federal judges choose to surround the law in an impenetrable miasma.


Generally speaking, do you think federal judges care about or even understand what happens when your personal information (or data) is stolen?

Think about it this way: do you think a snake cares about what happens to a mouse it ate?

If you don’t believe me, then read below…I’m just using their words.


The Clapper case cemented the federal courts’ refusal to recognize data breach harms.[2] In Clapper, plaintiffs challenged the constitutionality of a provision of the Foreign Intelligence Surveillance Act (FISA); the plaintiffs believed the government was illegally spying on their communications with foreign individuals (individuals the government suspected were terrorists), which caused them to incur great expense traveling abroad to keep those communications confidential.[3] The Supreme Court held the plaintiffs lacked standing because they could not show the injury of the surveillance was actually occurring or “certainly impending.”[4] Specifically, Justice Alito, writing for the Court, held the plaintiffs lacked standing “because they cannot demonstrate that the future injury they purportedly fear is certainly impending and because they cannot manufacture standing by incurring costs in anticipation of non-imminent harm.”[5] Incidentally, Edward Snowden subsequently uncovered that the government was in fact spying on the plaintiffs.[6] Courts have relied on Clapper to deny recovery for fear, anxiety, worry, stress, and time spent guarding against future identity theft, all of which they dismiss as “speculative” no matter how probable the future harm is.[7]

In 2016, Spokeo followed Clapper and was supposed to resolve ambiguities left by that case.[8] The Spokeo case, however, provided a frightening example of how incorrect information about an individual posted online can be very damaging and difficult to remedy.[9]

Plaintiff Robins (the Respondent before the Supreme Court) alleged the dossier of information Spokeo posted on the internet suggested he was overqualified and might be unwilling to relocate because of a nonexistent family, all of which hurt his chances of securing a job.[10] The Court was unmoved and disregarded the violation of the Fair Credit Reporting Act (FCRA); it held the violation (a minor or “bare procedural violation”) did not injure him sufficiently to confer standing, nor were his injuries “concrete.”[11]

Under Clapper and Spokeo, worry, anxiety, fear, and time spent guarding against future identity theft are not considered harm, no matter how probable future identity theft may be.

TransUnion[12] is a data privacy case that dealt with reputational damage that ultimately had economic impact. Although the history of the case is long and the factual situation complex and varied, the basic facts involved a watch list created by the U.S. Treasury’s Office of Foreign Assets Control (OFAC). The so-called “OFAC watch list” was meant to identify terrorists, drug traffickers, and other serious criminals. If a consumer’s first and last names matched the first and last names of a person on this list, that consumer was flagged as a “potential match” on the OFAC watch list.

A class of 8,185 people whose names were on the list filed a lawsuit in federal district court in California, alleging TransUnion (1) failed to ensure the accuracy of their credit files and (2) sent them defective notices, both of which clearly violated the Fair Credit Reporting Act (FCRA).[13] The case eventually went to trial, and the jury was so upset that it awarded punitive damages, driving the verdict to $60 million, later reduced to $40 million.[14]

TransUnion appealed the case to the Supreme Court. The Supreme Court essentially canceled (vacated) the jury verdict, and it said that only those people, 1,853 in total, whose credit reports containing the inaccurate OFAC alerts were actually sent to third parties (banks, car dealers, etc.) had suffered any “actual harm.”[15]

The Court dismissed the case as to the 6,332 people whose credit reports contained the misleading alerts but had not been sent to third parties, reasoning that the “mere presence of an inaccuracy in an internal credit file, if it is not disclosed to a third party, causes no concrete harm.”[16] The Court concluded these 6,332 people had no “standing,” which meant they had not been harmed enough to allow the Court to hear their claims. Basically, the Court did not think having your name incorrectly placed on a terrorist watch list without your knowledge meant you were harmed at all.[17]

The ramifications are wide and alarming for a few reasons that affect everyone. First, Congress passed the FCRA to remedy these kinds of situations, and it was clearly violated, yet the Supreme Court effectively asserted the right to override Congress and curtail its law-making judgment. Essentially, the Court “legislated from the bench” and made no bones about its willingness to continue doing so when it does not like the laws Congress passes. Second, the Court referred to this violation of a privacy statute (conceptually like those in data breach cases) as a “mere procedural violation,” the language it uses to describe violations of consumer privacy it does not deem serious.

Third, federal courts continue to focus on financial harm while discounting reputational harm, anxiety, worry, fear, time spent remedying the situation, and other so-called “intangible harms,” which are largely unrecognized in American law these days. Finally, these cases show that you must wait for something terrible to happen to you financially to be considered “harmed”; the law cannot be used to prevent harm.

Clapper, Spokeo, and TransUnion all have profound implications for our personal information, and they are cited in virtually every case dealing with data privacy issues. Yet these cases and their progeny, for the most part, work from the implicit (if not explicit) assumption that data breach claims are so new, so novel, that federal courts cannot even consider them absent some direct evidence of financial or physical harm.

Because of Clapper, Spokeo, and the cases that followed, the standard is still to show “concrete” harm, and, according to many judges, virtually the only way to do so is to show monetary loss. Having your data stolen is not, by itself, considered “actual harm,” even if common sense and historical precedent indicate otherwise. The rationale behind this notion is that data breach cases somehow lack historical precedent, but that ignores history: recognizing the kind of harm asserted by plaintiffs in data breach cases is entirely consistent with precedent.

Data breach harms are consistent with precedent because the common law has never required plaintiffs to assign a monetary value to emotional harm. Injured parties have always been able to recover for emotional injuries such as emotional distress and other forms of suffering, and denying injured parties the ability to recover for emotional harm ignores and disregards not only what the law always has been, but what it should be now.[18]

Moreover, the narrow conception of data breach harm as set forth in Clapper, Spokeo, and TransUnion largely ignores precedent that is conceptually similar and can easily support data breach cases surviving dismissal.

For example, “[t]he tort of assault—where the harm is the emotion of fear—dates back six and a half centuries.”[19] Historically, the tort of assault was based on fear of imminent physical harm, and it required no showing of physical injury or any other kind of harm.[20] The law has recognized, and still recognizes, the tort of alienation of affection (having sex with someone else’s spouse), which permitted recovery for emotional distress.[21] Noises, odors, dust, smoke, and excessive vibration have been recognized as harm, and the law of property evolved to cover intangible as well as tangible injuries.[22]

The tort of defamation protected reputation without proof of financial or physical suffering.[23] In fact, Solove & Citron take us through the very development of the right to privacy in American law, which dates to 1890, when Louis Brandeis, writing with famed lawyer Samuel D. Warren, referred to an “injury of the feelings.”[24]

As Warren & Brandeis famously wrote, privacy invasions can inflict “mental pain and distress, far greater than could be inflicted by mere bodily injury,” shaping how a person feels about themselves.[25] They also wrote that people should be able to decide the extent to which their personal information is revealed, shared, and disclosed to others.[26]


In essence, Solove & Citron, along with Calo, argue that courts applying Clapper, Spokeo, and the other leading cases of the day should recognize that injured parties can recover in data breach cases for emotional harm resulting from the breach, such as fear, anxiety, and worry. The law has recognized for hundreds of years emotional harm flowing from invasions of privacy and the person, in assault, defamation, alienation of affection, and intentional and negligent infliction of emotional distress. These are injuries just like the fear, anxiety, worry, and lost time that occur in data breach cases and that are largely ignored, explained away, or vastly discounted by the judges who dismiss them.[27]

Solove’s and Citron’s views are supported by industry experts. Private information can take the form of so-called personally identifiable information (PII) or protected health information (PHI).[28] PHI can be more than ten times more valuable than credit card information because it contains highly sensitive data, such as Social Security numbers, dates of birth, addresses, credit card numbers, and medical conditions.[29] Its theft is referred to as “the crime that kills” because cyberthieves can use it to file false claims, often altering the health information in your medical records, which can have lethal effects.[30]

How valuable is your PII and PHI? Surely, little old me can’t have information that’s worth much money?

THINK AGAIN!

The high value of PHI is why cybercriminals attempt one hack every thirty-nine seconds; in fact, the healthcare sector reported the second largest number of data breaches among all measured sectors in 2018, and it had the highest rate of exposure per breach.[31] It does not end there: one report found the average cost to resolve a healthcare identity theft-related incident was $20,000; 50% of victims lost their healthcare coverage because of premium spikes they could not afford or the inability to resolve fraudulent charges; and 30% said their insurance premiums went up after the event.[32] Unfortunately, 40% of victims were never able to resolve their identity theft at all.[33]


The same kind of risk exists when PII is exposed. For instance, consider something as banal as a driver’s license. Even Experian, a rival credit bureau of TransUnion, warns:

A driver's license is an identity thief's paradise. With that one card, someone knows your birth date, address, and even your height, eye color, and signature. If someone gets your driver's license number, it is also concerning because it's connected to your vehicle registration and insurance policies, as well as records on file with the Department of Motor Vehicles, your place of employment (which keeps a copy of your driver's license on file), doctor's offices, government agencies, and other entities. Having access to that one number can provide an identity thief with several pieces of information they want to know about you. Next to your Social Security number, your driver's license number is one of the most valuable pieces of identifying information to keep from thieves.[34]

American law requires that an individual have “standing to sue” to survive dismissal at the earliest stage possible.[35] Simply put, to maintain a case in federal court you must have evidence you have been injured (usually a financial injury); if you do not, your case will be dismissed (thrown out of court). A great deal of research has focused on what the seminal cases such as Clapper, Spokeo, and TransUnion say and what they mean, and those cases must always be kept in mind in any analysis of federal data privacy cases. Most, if not all, cases dismissed for lack of standing are dismissed because there is no “concrete” evidence of harm, which means no evidence of financial harm or “fraud” generally. However, little research has considered what patterns, if any, exist across federal data breach cases dismissed for lack of standing.

In short, most data breach cases are dismissed because a judge finds no clear evidence of financial harm. Sadly, your mental anguish, stress, reputational damage (unless clearly quantifiable, as if that were possible), and time spent remedying the situation are not recognized as “actual harm” under American law.

How do you handle this?

I can’t give you an answer other than to say: be careful. Think before you click. Say no to cookies tracking your movements on websites, and use websites that are more secure (as if that’s possible!).

Thanks again for reading.

Feel free to email me at michaelwuva78@gmail.com with questions.

Best regards,

Michael Wells, Esq., MLS


[1] See Jon L. Mills & Kelsey Harclerode, Privacy, Mass Intrusion, and the Modern Data Breach, 69 FLA. L. REV. 771 (2017).

[2] See Clapper v. Amnesty International USA, 568 U.S. 398 (2013).

[3] Clapper, 568 U.S. at 401.

[4] Id. at 421-22. Daniel J. Solove and Danielle Keats Citron provide an incredible discussion of Clapper and other pertinent cases in the data privacy realm, and much of the discussion in this section is aided by their insightful analysis. See Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data-Breach Harms, 96 TEX. L. REV. 737, 741-42 (2018).

[5] Id.

[6] See Solove & Citron, Risk and Anxiety, supra note 4, at 741-42.

[7] Id.

[8] Spokeo v. Robins, 578 U.S. 330 (2016).

[9] See generally, Spokeo v. Robins, 578 U.S. 330 (2016).

[10] Spokeo, 578 U.S. at 353-54 (Ginsburg, J., dissenting); see also Solove & Citron, Risk and Anxiety, supra note 4, at 743-45.

[11] Id. at 342-43 (majority opinion).

[12] See generally, TransUnion LLC v. Ramirez, 141 S.Ct. 2190 (2021).

[13] TransUnion, LLC, 141 S.Ct. at 2216.

[14] Id., 141 S.Ct. at 2204.

[15] Id., 141 S.Ct. at 2212.

[16] Id., 141 S.Ct. at 2210.

[17] Id.

[18] Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data-Breach Harms, 96 TEX. L. REV. 737, 741-742, (2018).

[19] Ryan Calo, Privacy Harm Exceptionalism, 12 COLO. TECH. L.J. 361, 363 (2014). For further discussion of the history of the legal foundation for recognizing anxiety as harm, see Solove & Citron, Risk and Anxiety, supra note 4, at 741-42.

[20] Solove & Citron, supra note 4, at 767.

[21] Id. at 768.

[22] Id.

[23] Id.

[24] Id.; see Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 193 (1890).  

[25] Id. at 196-97. This passage and the evolution of privacy law are discussed poignantly in Solove & Citron, supra note 4, at 768.

[26] Id.

[27] See supra note 21.

[28] Personally identifiable information (“PII”) generally refers to information that alone or in conjunction with other information identifies an individual, including an individual’s contact information (postal addresses, email addresses, and phone numbers), Social Security number (SSN), date of birth, driver’s license number or other government-issued identification number, and financial account numbers. See generally 2 C.F.R. § 200.79. Protected health information (“PHI”) is a category of information that relates to an individual’s physical or mental health and the provision of health care, including medical information as that term is defined in 45 C.F.R. § 160.103.

[29] The Value of Personal Medical Information: Protecting Against Data Breaches, https://www.naham.org/page/ConnectionsThe-Value-of-Personal-Medical-Information# (last visited Apr. 16, 2023).

[30] Id.

[31] Identity Theft Resource Center, 2018 End-of-Year Data Breach Report, available at: https://www.idtheftcenter.org/2018-end-of-year-data-breach-report/ (last accessed Mar. 26, 2023).

[32] Id.

[33] Elinor Mills, Study: Medical identity theft is costly for victims, CNET (March 3, 2010), available at: https://www.cnet.com/news/study-medical-identity-theft-is-costly-for-victims/ (last visited Apr. 11, 2023).

[34] Tim Greene, Anthem Hack: Personal Data Stolen Sells for 10x Price of Stolen Credit Card Numbers, IT World (Feb. 6, 2015), available at: http://www.itworld.com/article/2880960/anthem-hack-personal-data-stolen-sells-for-10x-price-of-stolen-credit-card-numbers.html (last visited Apr. 7, 2023).

[35] Standing, LII / Legal Information Institute, https://www.law.cornell.edu/wex/standing (last visited Apr. 1, 2023).

Friday, September 15, 2023

Federal Judges and Congress Don't Care About Your Privacy



Privacy matters to hundreds of millions of people in the United States, not just lawyers, judges, social scientists, policy makers, and companies who fear being sued over privacy violations. The right to privacy is a fundamental right recognized as existing under the Fourteenth Amendment of the United States Constitution.[1] When a right is deemed fundamental under the Constitution, it means it is so vital to human existence that life cannot be lived in any fulfilling and meaningful way without it.[2] Although the Constitution generally addresses freedoms as they pertain to an individual against the government, privacy is a much broader concept and right, one that is in grave danger. In America, both the government and big business are eroding the right to privacy.[3]

In 1890, two American lawyers, Samuel Warren and Louis Brandeis (a future United States Supreme Court Justice), struggled with and worried over emerging technologies such as cameras and gossip rags, both of which served to erode privacy in America.[4] Warren and Brandeis realized that privacy invasions can cause “mental pain and distress, far greater than could be inflicted by mere bodily injury.”[5] Despite the gap of more than a century between us, their struggle remains our struggle.

Your Data Is Being Sold Without Your Consent, Which Exposes You to Identity Theft 

Currently, the government, companies, data brokers, and health providers are all collecting our personal, medical, and financial information, the most intimate details of our lives; sometimes we know about it, and sometimes—often—we do not.[6] This data is the most valuable commodity on earth, and it is being collected, packaged, and sold, often without our consent or knowledge.[7]

Protecting the right to privacy means protecting this personal data, our personal data, which is comprised of the most intimate details of our lives such as our sexual preference, our sex lives, gender identity, medical history, thoughts, doubts, needs, desires, preferences, things we only tell our loved ones (and that they tell us), our emails, texts, online chats, browsing history, purchasing history, nanny cam videos, and private details about our young children. In short, privacy is a “precondition to a life of meaning.”[8] Imagine that information is vulnerable, that we are vulnerable, that our children are vulnerable. Because it is vulnerable. We are all vulnerable, which means we are unsafe unless we can somehow be protected.

When a data breach occurs, our private information, intimate data about what creates our identity and what makes us human, is stolen by hackers from companies such as Target, Walmart, Amazon, LinkedIn, Equifax, TransUnion, and others, and more than likely used to steal our identity and do us personal, financial, and psychological harm.[9]

With privacy, the challenge is always maintaining it, and privacy is destroyed when a data breach occurs. If used maliciously (and it usually is), the personal information contained in stolen data can destroy reputations, expose private medical conditions, endanger people physically, and ruin people financially. This affects everyone, and there is no way to escape it, which is why it is so scary and so important.

You Will Be a Victim Of a Data Breach 

Unfortunately, the risk of exposure of consumers' private data is surprisingly high in the United States: 48% of American consumers have been victims of a data breach, compared to 33% of consumers in the rest of the world.[10] In the United States alone that is well over a hundred million people; worldwide, 33% amounts to billions of people.

Furthermore, 47% of consumers in the United States who are victims of a data breach may not know it.[11] Data breaches have disastrous consequences for companies and for people, and these studies and facts provide a prism through which to consider the legal aspect of the data breach.

The question this raises is: what, if anything, can the law do to protect personal data from being breached, stolen by hackers, and used to destroy our credit, steal from us, and ruin our reputations?

Not much, argues renowned University of Virginia School of Law Professor Danielle Keats Citron:

“Overly narrow harm requirements are the source of the problem. Courts dismiss claims because plaintiffs have not suffered tangible injuries like economic or physical damage. Federal courts use the same rationale to deny the same plaintiffs standing. Privacy harms are often intangible—emotional distress, anxiety, thwarted expectations and other denials, loss of trust, the chilling of expression, damaged reputations, and discrimination. These injuries are real.”[12]

It is particularly important to view it from the legal perspective because, in theory, the law should make data breaches less likely by holding the organizations that control and fail to secure the data accountable as well as punish criminals who steal the data. Without legal accountability, there can be no deterrence, which means organizations will not be motivated or forced by the power of the law to protect individuals’ privacy.

Laws should require and incentivize businesses, medical providers, educational institutions, and government entities, all of which store and maintain data on individuals, to make that data more secure. Likewise, there should be harsh financial penalties for organizations that do not keep data safe. It cannot be assumed that organizations will make data safe; the law must hold them accountable.

It is anathema to think the law might make it easier for corporations to avoid responsibility for protecting the personal data they gather from individuals and profit from financially. Unfortunately, this is precisely what is happening because of barriers created by laws, the lack of laws, and federal judges not holding those organizations accountable in court.

Lawsuits are a way of holding organizations accountable when they break the law, and slamming the courthouse door in the face of individuals trying to seek remedies for harm they have suffered is fundamentally unfair, especially when corporations do not receive the same level of scrutiny in the courts as citizens. That kind of thinking is antithetical to justice because it makes private information less safe, yet it has invaded the minds of many federal judges. It is extremely difficult for victims of a data breach to file a lawsuit in federal court without having it dismissed, because the court finds no “concrete” injury causally linked to the breach. This is referred to as “actual harm,” which is generally financial harm, and it is necessary to establish standing, the legal injury required to maintain a federal lawsuit. That harm may not be apparent or even discoverable, but the law does not allow individuals to recover for worry, anxiety, fear, or time spent remedying the situation, nor does it provide recovery for future harm, no matter how likely that future harm is.

[1] U.S. Const. amend. XIV, § 1.

[2] Fundamental Right, LII / Legal Information Institute, https://www.law.cornell.edu/wex/fundamental_right (last visited Apr 1, 2023).

[3] Adam D. Moore, Privacy, Speech, and Values: What We Have No Business Knowing, SSRN Journal (2015).

[4] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 193 (1890). Professors Daniel J. Solove and Danielle Keats Citron are the foremost experts on the right to privacy and data privacy in particular in the United States, and I found their article Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data-Breach Harms, 96 TEX. L. REV. 737, 741-742, fn. 22 (2018) most helpful.

[5] Id.

[6] Chris Kirkham & Jeffrey Dastin, A look at the intimate details Amazon knows about us, Reuters, Nov. 19, 2021, https://www.reuters.com/technology/look-intimate-details-amazon-knows-about-us-2021-11-19/ (last visited Apr 15, 2023).

[7] Data: The Most Valuable Commodity for Businesses, KDnuggets, https://www.kdnuggets.com/data-the-most-valuable-commodity-for-businesses.html (last visited Apr 15, 2023).

[8] Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age, Introduction, xii (1st ed. 2022).

[9] Daniel J. Marcus, The Data Breach Dilemma: Proactive Solutions for Protecting Consumers' Personal Information, 68 DUKE L.J. 555 (2018).

[10] Report: 33% of global consumers are data breach victims via hacked company-held personal data, VentureBeat (2022), https://venturebeat.com/security/report-33-global-consumers-data-breach-victims-hacked-company-held-personal-data/ (last visited Apr. 6, 2023).

[11] Data breaches: Most victims unaware when shown evidence of multiple compromised accounts, University of Michigan News (2021), https://news.umich.edu/data-breaches-most-victims-unaware-when-shown-evidence-of-multiple-compromised-accounts/ (last visited Apr. 6, 2023).

[12] Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (1st ed. 2022).

Monday, August 21, 2023

Why are big banks so bad at data privacy?

Banks typically stink at protecting data, and they know better. Why is this? The main reason: GREED. They are cheap when it comes to cybersecurity because it allows them to pay their executives more. Oh...they like to blame lawyers and regulators, but that's all talk. 

The banks "doth protest too much." 

Their bellyaching is meant to blind you to what they do, which is almost nothing. Banks are supposed to be secure. They aren't and likely won't be anytime soon. 

What's that saying about _____ flowing downhill? You are at the bottom of that hill.

Here are some ways banks screw up (almost all the time):



1. Weak Cybersecurity Measures: Insufficient cybersecurity measures, such as outdated software or a lack of regular security audits, can make banks vulnerable to hacking and data breaches. Just look at Wells Fargo, which CNN reports "has been plagued by scandal." Consumer deposits just disappeared back in March. This should come as no surprise, since Wells Fargo settled a lawsuit for $3 billion back in 2020 over fake accounts. One of the funniest and most absurd things I ever heard came from a lawyer who used to work for a large bank (it wasn't Wells Fargo), who claimed the bank he worked for cared about the people whose mortgages it held. Fortunately, I wasn't near the guy, because I spit out my coffee in shock at this absurd statement. Big. Banks. Do. Not. Care. They don't care. They have never cared. And they never will care.

2. Inadequate Data Encryption: If sensitive personal data is not properly encrypted, it can be easily accessed by unauthorized individuals during data transmission or storage.

3. Improper Data Handling: Banks might mishandle data by sharing it with third parties without consent or keeping it longer than necessary, increasing the risk of unauthorized access. In other words, they sell your data to third parties, some of whom are not exactly above board.

4. Weak Authentication: Banks sometimes use weak authentication methods, like simple passwords or outdated security questions, making it easier for attackers to gain unauthorized access to accounts.

5. Lack of Employee Training: Without proper training, bank employees might inadvertently mishandle data, fall victim to social engineering attacks, or fail to recognize suspicious activities. People are the biggest problem, with 91% of cybersecurity incidents reportedly stemming from human error.



6. Insufficient Access Controls: Poor access controls can allow unauthorized personnel to access sensitive customer information, increasing the risk of data breaches.

7. Inadequate Incident Response Plans: Without a robust plan in place, banks might struggle to respond effectively to data breaches, leading to prolonged exposure of sensitive information.

8. Ignoring Regulatory Compliance: Failure to comply with data protection regulations like GDPR or HIPAA can result in legal consequences and damage to the bank's reputation.

9. Overlooking Physical Security: Focusing solely on digital security while neglecting physical security measures can expose sensitive data to theft or unauthorized access.

10. Vendor Management Issues: Banks that work with third-party vendors must ensure these partners also adhere to stringent data protection practices, as vendor breaches can impact the bank's customers.

To mitigate these mistakes, banks need to invest in robust cybersecurity measures, implement strong encryption protocols, train employees on data privacy, regularly update their systems, and establish effective incident response plans. Additionally, staying informed about evolving cybersecurity threats and compliance requirements is essential to maintaining the security and trust of their customers. Of course, this would require caring about their customers, which they may not always do. In fact, I suspect they rarely care. If they did, they would protect their customers.

Thursday, August 17, 2023

Why is AI a threat to lawyers?

The statements below are true, but I also think AI helps lawyers more than it hurts them. And we are already using it and have been for a long time. Of course people--lawyers especially--are afraid of what they do not understand. However, that is no reason to ignore change. We need to embrace change, or we will become obsolete; some of us already are. Of course, AI is no panacea, but it cannot be ignored. Ignoring AI is perilous, but relying on it is perilous, too, and maybe even more dangerous than not using it at all.




AI is considered a potential threat to lawyers in several ways:

1. Automation of repetitive tasks: AI can automate various repetitive tasks that lawyers typically handle, such as document review, contract analysis, legal research, and due diligence. This can significantly reduce the time and effort required for these tasks, potentially leading to a decrease in demand for junior lawyers or paralegals. Another upshot: you won't have to read through dense blocks of 8-point font, afraid of missing an italicized period.

Likewise, you won't have to spend so much time on formatting and dealing with lawyers who fixate on it (it is important, but not that important). The best example here is class action litigation, which relies heavily on forms where the questions, and their formatting, are standardized. I recall lawyers complaining about formatting all the time. It matters some, but not for a freaking diagnostic tool. And you shouldn't have to explain your notes to some glorified hall monitor. That's silly. Why not just have the AI format it?

In my opinion, the time would have been better spent focusing on the questions asked of the potential claimant. Some of the questions--the ones I didn't draft, actually--were terrible: compound, objectionable questions. If you tried to ask them in a deposition, you'd never get them out without an objection, and the objection would be warranted. You'd also have a very confused witness. I honestly can't begin to tell you how terrible these forms were. I came to the conclusion that the people drafting them knew nothing--absolutely nothing--about taking or defending depositions. So, if their job is to construct and format these forms, then automation will take their jobs, as it should. They need to focus on the questions they ask and (I realize this is a novel concept) LISTENING TO THE CLIENT. The one lawyer I'm thinking about took maybe one deposition in two years. The one he took was an absolute disaster, or so I gathered, as I wasn't there.

2. Legal research and analysis: AI-powered tools can quickly analyze vast amounts of legal data, including case law, statutes, and regulations, to provide insights and recommendations. This can help lawyers in their research and analysis work, but it also means that AI systems can potentially perform these tasks more efficiently and accurately than humans, posing a challenge to lawyers' expertise and value. Wouldn’t it be nice to have a second set of eyes reviewing your work? 

3. Contract review and drafting: AI can review and analyze contracts, identifying potential risks, inconsistencies, or missing clauses. It can also generate draft contracts based on predefined templates and specific requirements. This threatens the traditional role of lawyers in contract review and drafting, as AI systems can perform these tasks faster and with fewer errors. This would be great for drafting complaints or other pleadings that all essentially say the same thing with the only differences being the names. 

4. Cost reduction and access to legal services: AI-powered legal services, such as chatbots or virtual assistants, can provide basic legal advice and guidance to individuals at a lower cost compared to hiring a lawyer. This may make legal services more accessible to a broader population, but it also means that lawyers may face competition from AI systems in providing routine legal advice. I think you can cut out a ton of paralegal time with these as primary screeners. If there's something there, then a human can do it. 

5. Ethical and privacy concerns: The use of AI in legal practice raises ethical and privacy concerns. For example, AI algorithms may have biases or lack transparency, potentially leading to unfair outcomes or decisions. Arguably, AI can be racist, because it utilizes data steeped in racial stereotypes. Lawyers need to be aware of these issues and ensure that AI systems are used responsibly and in compliance with legal and ethical standards.

While AI poses certain threats to lawyers, it also presents opportunities for them to enhance their work, improve efficiency, and focus on more complex and strategic aspects of legal practice.

Thursday, August 10, 2023

Why Should You Care About Security and Data Privacy?

Ignorance can be expensive. Depending upon the severity of the violation, penalties for violating the GDPR, CCPA, and HIPAA can run into the millions.



For GDPR violations, organizations can be fined up to 4% of their global annual turnover or €20 million (whichever is greater).

CCPA fines can be up to $7,500 per violation. Why do you think California law firms are falling all over themselves to hire as many lawyers as possible to file lawsuits as fast as possible? Although the fines are paid to the State of California, civil penalties can be up to $750 per California consumer class member, which can add up to a bunch in a data breach class action lawsuit. And that's just one of the things companies might be liable for in a data breach.

Then there's HIPAA, that frequently violated statute, where fines can reach $1.5 million per violation category, per year.
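To put those numbers side by side, here's a back-of-the-envelope Python sketch. The turnover and class-size figures are hypothetical, chosen purely for illustration, and none of this is legal advice:

```python
# Hypothetical figures for illustration only -- not legal advice.
annual_turnover_eur = 600_000_000  # hypothetical global annual turnover

# GDPR: the greater of 4% of global annual turnover or EUR 20 million.
gdpr_fine = max(0.04 * annual_turnover_eur, 20_000_000)

# CCPA: statutory damages of up to $750 per affected California consumer.
affected_consumers = 100_000  # hypothetical breach class size
ccpa_exposure = affected_consumers * 750

print(f"GDPR exposure: EUR {gdpr_fine:,.0f}")
print(f"CCPA statutory-damages exposure: ${ccpa_exposure:,}")
```

Even a modest breach class pushes the CCPA number well into eight figures, which goes a long way toward explaining the rush to file.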

Saturday, August 5, 2023

Discussion with ChatGPT Regarding AI in Legal Research and Writing




There must be some middle ground between letting AI write a brief for you and banning all use of it. I asked ChatGPT about it.

Hasn’t AI been used online and with apps for years now?

Yes, that's correct! AI has been used online and with apps for many years now. It has become an integral part of various online platforms and applications, offering a wide range of functionalities such as natural language processing, recommendation systems, image recognition, and more. AI technology continues to advance, enabling more sophisticated and useful applications in the digital world.

What about with legal research search engines such as Westlaw and Lexis?

Yes, AI has also been integrated into legal research search engines like Westlaw and LexisNexis. These platforms have leveraged AI technologies to enhance their search capabilities, making it easier for legal professionals to find relevant case law, statutes, regulations, and other legal materials. AI algorithms can analyze vast amounts of legal data, improve search accuracy, and provide more efficient results, saving time and effort for legal researchers and practitioners. This integration has been particularly valuable in the legal field, where access to precise and up-to-date information is crucial for legal research and analysis.

Do you know some judges have banned use of AI?

Wouldn’t that rule out most kinds of legal research?

As of my last knowledge update in September 2021, I am not aware of any widespread bans on the use of AI in legal research. However, it is possible that some individual judges or jurisdictions may have raised concerns or implemented restrictions on the use of AI in specific legal contexts.

The use of AI in legal research is generally aimed at improving efficiency and accuracy, not replacing the role of human legal professionals. AI technologies can assist lawyers and legal researchers by quickly sifting through vast amounts of information and providing relevant results. It's essential to strike a balance between utilizing AI tools and relying on human expertise to ensure the best possible outcomes in legal proceedings.


If there have been developments or changes in regulations regarding AI usage in legal research beyond my knowledge cutoff, I would recommend checking the latest legal news and official sources for the most up-to-date information.

Note: a June 2023 article says Judge Brantley Starr (any relation to Jeb?) of the N.D. Tex. banned ChatGPT-assisted briefs and said the tool “hallucinated.” LOL. He made lawyers sign a pledge that they didn’t use it. Absurd.

How long has AI been used in the major legal databases?

The integration of AI in major legal databases has been a gradual process over the past decade or so. While the specific timelines and implementation details may vary across different platforms, AI technologies started to make their way into legal databases around the mid-2010s.

Initially, AI was used for improving search algorithms, natural language processing, and data analysis to provide more accurate and relevant search results for legal professionals. As AI capabilities continued to evolve and improve, it became more prevalent in various legal research tools and platforms.

By leveraging machine learning and natural language processing techniques, legal databases like Westlaw and LexisNexis have been able to enhance their search capabilities, assist with legal research, and streamline the discovery of pertinent legal information.

It's important to note that the integration and development of AI in legal databases are ongoing processes, with continuous improvements and updates being made to adapt to the evolving needs of the legal community.

Friday, August 4, 2023

What’s a cookie?





I know a lawyer who is supposed to be great and knowledgeable when it comes to litigating technology cases. Not when it comes to his understanding (or lack thereof) of cookies.

What are cookies?

But…this same lawyer once asked me, “What are cookies?” It’s true. Is he what he thinks he is? It’s not for me to say. This has led me to the conclusion that no one will ever tell this poor man what a cookie is. So…what is a cookie?

Cookies are defined as “small text files containing unique data to identify your computer to the network.”

Cookies track you.

When you visit a website, the website's server sends a small file (the cookie) to your web browser and stores it on your device. This cookie contains information specific to your interaction with the website. When you revisit the same website or navigate to another page on the site, your web browser sends the stored cookie back to the website's server.

According to the website All About Cookies (www.allaboutcookies.org), when you visit a website, cookies can:

  • Set your chosen language preference
  • Remember items in a shopping cart
  • Remember if certain settings are turned on
  • Authenticate your identity
  • Prevent fraud
  • Create highly targeted ads
  • Track how you interact with ads
  • Make personalized content recommendations
  • Track items you view in an online store
  • Auto-fill information in forms

ARE YOU AFRAID YET?

Cookies help websites tailor the site to induce purchases.

The website server then reads the information in the cookie to remember details about your previous visit. For example, it can remember your login status, language preference, or items you added to a shopping cart. This helps the website tailor the user experience and provide personalized content.

Persistent Cookies v. Session Cookies.

Cookies can be either "session cookies" or "persistent cookies." Session cookies are temporary and are deleted when you close your browser, while persistent cookies remain on your device for a specified period or until you manually delete them.
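To make the session-versus-persistent distinction concrete, here's a minimal Python sketch using the standard library's `http.cookies` module; the cookie names and values are made up for illustration. The only real difference between the two `Set-Cookie` headers is the lifetime attribute:

```python
from http.cookies import SimpleCookie

# Session cookie: no Expires/Max-Age attribute, so the browser
# discards it when you close the browser.
session = SimpleCookie()
session["sessionid"] = "abc123"  # hypothetical name/value

# Persistent cookie: Max-Age keeps it on your device for 30 days
# (or until you manually delete it).
persistent = SimpleCookie()
persistent["lang"] = "en"  # hypothetical language-preference cookie
persistent["lang"]["max-age"] = 60 * 60 * 24 * 30  # 30 days, in seconds
persistent["lang"]["path"] = "/"

print(session.output())     # a bare Set-Cookie header with no lifetime
print(persistent.output())  # includes Max-Age, so it survives a restart
```

Both are just headers the server sends with its response; the browser stores them and echoes them back on every later request to the same site, which is exactly the tracking mechanism described above.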


Nearly ALL Websites Use Cookies—even the reputable ones.

It's important to note that cookies can store personal information, but reputable websites usually use them responsibly and in compliance with privacy regulations. However, some users may choose to block or delete cookies for privacy reasons.

 


Thursday, August 3, 2023

The Corrosive Nature of Mean Emails.


It’s incredible, some of the horrible things people say via email, and lawyers are among the worst offenders.





When a work colleague attacks another colleague and then copies other staff on the email, it is generally viewed as highly unprofessional and inappropriate behavior. This action can have several negative consequences:

1. Damage to professional relationships: Attacking a colleague in a public forum like an email can severely damage professional relationships. It creates a hostile and confrontational environment, making it difficult for colleagues to work together effectively.

2. Erosion of trust and teamwork: Such behavior erodes trust among team members and undermines the spirit of collaboration. It creates a sense of fear and insecurity, making it challenging for colleagues to trust and rely on each other.

3. Negative impact on morale: Witnessing or being involved in such an incident can have a significant negative impact on team morale. It creates a toxic work environment where employees may feel anxious, stressed, or demotivated.

4. Damage to reputation: The colleague who initiates the attack risks damaging their own professional reputation. It reflects poorly on their ability to handle conflicts or work well with others, which can have long-term consequences for their career growth.

5. Potential escalation of conflicts: Copying other staff on the email can escalate the conflict and involve more individuals in the issue. This can lead to further misunderstandings, conflicts, or even a breakdown in team dynamics.


In most professional settings, it is expected that conflicts or disagreements be addressed privately and respectfully, through appropriate channels such as one-on-one conversations or discussions with supervisors or HR. Attacking a colleague and copying others on an email is generally seen as an inappropriate and counterproductive way to handle workplace conflicts.

Wednesday, August 2, 2023

Who "wins" class action lawsuits?

 






I believe corporations and other entities should be held legally accountable. Sometimes the only way to do that is with a lawsuit. At times, single lawsuits won't work, and similarly aggrieved people must band together. This is what happens in a class action lawsuit, which allows representative plaintiffs to stand in for their particular class. There are often multiple classes in a federal class action lawsuit. This is true in some state class actions as well. The settlements are often tens of millions of dollars, sometimes hundreds of millions. What do the class members get for all of this?

Not much. Between $13 and $90 per person, according to a 2019 empirical analysis by Reuters.

And what is the size of the average class action settlement? $56.5 million. Furthermore, the median claims rate (according to the FTC) is 9%. Contrast that with the average personal injury settlement of $60,000-plus. Usually that breaks down to roughly $20,000 for the medical bills, $20,000 for the client to walk away with, and $20,000 for the lawyer. The standard fee is 40% if a lawsuit is filed, but if the lawyer can resolve the case without much time and cost, it can be a good idea to split it one-third, one-third, one-third.

What do lawyers make in class action lawsuits? Well...the defense lawyers make hundreds of thousands of dollars defending these massive lawsuits, and the plaintiffs' lawyers get between 35% and 40% of the total recovery on average. The more claims that are filed, the smaller the payout for the class members. Lawyers can elect to take a percentage, or they can multiply their hourly rate by the hours they worked, and a formula is applied. In larger states it's not unusual for lawyers to bill exorbitant rates ($500-$1,000 per hour). It's great for the lawyers, but most of the money goes to ID protection and credit monitoring, neither of which helps much.
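Here's a rough Python sketch of that math. The class size and fee rate are hypothetical, and it deliberately ignores settlement administration, notice costs, and the credit-monitoring allocation, all of which shrink the per-person number further:

```python
# Hypothetical class action settlement math, illustrative only.
settlement = 56_500_000   # average settlement figure cited above
fee_rate = 0.35           # plaintiffs' counsel fee (35-40% range)
class_size = 1_000_000    # hypothetical class size
claims_rate = 0.09        # median claims rate cited above

fees = settlement * fee_rate
net_fund = settlement - fees              # before admin costs, monitoring, etc.
claimants = int(class_size * claims_rate)
per_claimant = net_fund / claimants

print(f"Attorneys' fees: ${fees:,.0f}")
print(f"Claimants who actually file: {claimants:,}")
print(f"Per-claimant share: ${per_claimant:,.2f}")
```

Even before those additional costs come out of the fund, the fee check dwarfs any individual class member's recovery; after they do, you land in the $13-$90 range reported above.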

So...what do you do when you hear lawyers talking about "truth, justice" and all those inflated and meaningless words in the context of many class actions? I'd be skeptical.

How Contributory Negligence Can Affect Your Personal Injury Claim in North Carolina

When pursuing a personal injury claim in North Carolina, understanding the concept of contributory negligence is critical. Unlike most state...