October 5, 2023

Podcast - Artificial Intelligence in Healthcare and How to Comply with HIPAA and State Privacy Laws

Counsel That Cares Podcast Series

In this episode of "Counsel That Cares," HIPAA and healthcare privacy attorneys Beth Pitman and Shannon Hartsfield dissect the highly publicized Dinerstein v. Google case. They address the implications and concerns of sharing protected health information (PHI) for research in developing artificial intelligence (AI) for healthcare, and what that means for complying with HIPAA and state privacy laws.

Morgan Ribeiro: Welcome to Counsel That Cares. This is Morgan Ribeiro, the host of the podcast and a director in the firm's Healthcare Section. Today, I am joined by Shannon Hartsfield and Beth Pitman, both partners in Holland & Knight's Healthcare Regulatory and Enforcement Practice Group. Both are attorneys who regularly advise clients on a variety of healthcare compliance matters, including the Health Insurance Portability and Accountability Act, or HIPAA, and data strategy matters.

A highly publicized court case, Dinerstein v. Google, caught our attention, and I wanted to get Beth and Shannon's take on the case and what it could mean for our clients in the healthcare space. To get us started, Beth, I think it would be really helpful for our listeners if we first summarize what this case is all about.

Beth Pitman: Thank you, Morgan. In 2017, Google teamed up with a hospital system on a project to test and develop some artificial intelligence technology. They wanted to be able to use the hospital's EMR information to engage in predictive analytics for healthcare. The collaboration focused on using machine learning techniques to predict hospitalizations by identifying instances when a patient's health is declining. Google and the hospital exchanged protected health information in a form called a limited data set. This is a process that HIPAA provides for, and it requires the use of a data use agreement. In exchange for providing the data, the data use agreement says, the hospital received a non-exclusive perpetual license to use the trained models and prediction technology created by Google.

Both organizations considered this to be a form of research. So the data-sharing collaboration involved patients who were seen at the healthcare provider from 2009 to 2016. The defendants published a study of their results in 2018. And then in 2019, soon after that, the defendants were sued in the U.S. District Court for the Northern District of Illinois in a class action lawsuit that accused the hospital of sharing hundreds of thousands of patient records with the technology giant, records that retained identifiable date stamps and doctors' notes. Both of these types of information are included in electronic health records.

The plaintiff argued there was no expert determination that the information could not be re-identified. For a limited data set, however, that's not needed, because the limited data set, as we'll discuss later on, can include a larger amount of information that can be somewhat identifiable. So, the plaintiff in this case alleged a variety of claims under state law in Illinois, and the defendants moved to dismiss the claims, which is a process that's raised early on in litigation. The plaintiff alleged a breach of the duty of medical confidentiality, a breach of contract arising from the notice of privacy practices under HIPAA and the patient authorization, and an invasion of privacy, and some of the contract injuries were based on an alleged sale of the PHI by the defendant.

In a motion to dismiss, it's important to remember when you're looking at these cases that the plaintiff's facts have to be deemed to be true by the court. No evidence is presented, and there's nothing to establish whether the facts are true or not; the court has to deem them to be true. The court can then dismiss the claim if there is no legal basis for it. In this case, both the underlying court and the Seventh Circuit dismissed the claims because they found the plaintiff had not shown injury and did not have standing to raise the claims. We'll talk about that a little bit later on.

Morgan Ribeiro: Great. I think that's really helpful. And there's obviously a lot for us to unpack here. Shannon, I want to turn it over to you.

And, you know, these partnerships between health systems and tech companies are becoming fairly common in the healthcare industry as we kind of push forward to use data analytics and machine learning to improve clinical diagnoses and better predict disease. Many hospitals are exploring deals related to artificial intelligence. That kind of seems like the buzzword of 2023. What concerns does this particular lawsuit call out about patient privacy and patient data in the research context?

Shannon Hartsfield: The defendants stated in an article that they were using de-identified data for this collaboration, but it's important to note that a limited data set is not completely de-identified. In fact, it's still protected health information under HIPAA, what we call PHI. If information is completely de-identified, then HIPAA doesn't apply at all, provided it is, in fact, de-identified. But to de-identify it, you have to remove a number of specific data elements, including dates related to the patient.

And here, under this fact pattern, the information had the date stamps in there and had some free-text fields in there, which may or may not be de-identified. So it was a limited data set, which can still be used and disclosed for research, healthcare operations, etc., and we'll talk about that. It's still protected health information; it's not completely de-identified. And when hospital systems are entering into these partnerships for artificial intelligence, the companies that are creating the AI solutions need data to train their models. So you have to think very carefully about what you're turning over and whether there's a HIPAA-compliant pathway to do that.

An implied question seems to be whether data can ever truly be de-identified when you're providing it to a tech company like Google that could potentially re-identify it. Under what we call the HIPAA de-identification safe harbor, you have to remove 18 specific identifiers, but you also can't have actual knowledge that the information could be used alone or in combination with other information to identify patients. So, the question becomes, when I'm handing this de-identified data over to the recipient, can they re-identify it? Or is there an expert who will come along and document that the information is sufficiently de-identified?

But in this case, we're not really worried too much about the de-identification requirements because it wasn't fully de-identified. It was a limited data set, which was disclosed pursuant to a data use agreement, which is perfectly permissible under HIPAA as long as the data use agreement complies and the limited data set complies. When healthcare providers and tech companies continue to work together, the key is to look at the details of the specific situation you're dealing with and then carefully analyze those facts to make sure that you're complying with HIPAA.

Morgan Ribeiro: So, I want to pause right there, and Beth, maybe you can answer this question, but can you define for our listeners what is a limited data set?

Beth Pitman: A limited data set is a term defined in HIPAA, and it excludes certain direct identifiers, such as the name, the postal address, Social Security number and medical record number. It can include some other demographic information, like your ZIP code, and elements of dates in a medical record, like admission date, discharge date, date of service and other items. It is not completely de-identified, but it includes only a limited amount of identifying information, which is why it's called a limited data set. And this is perfectly permissible to be used, as Shannon pointed out, under HIPAA for research purposes and also for other types of purposes, as long as there's a data use agreement in place that meets the requirements of HIPAA and sufficiently protects the information.

Morgan Ribeiro: I feel like there's so much to define around the different types of protected information as well. And there's a lot more to this story and the evolution of how things have happened since this lawsuit was originally filed back in 2019. The lawsuit also claims that Google has the ability to re-identify patients. Can you tell us more about that?

Beth Pitman: Yeah, sure. So under the HIPAA rules, HIPAA gives you a list of identifiers, and technically, under the rule, if those identifiers are not present, the information is de-identified.

In this case, there were no allegations that an expert opinion had been obtained to state that, yes, the information that was provided to Google was correctly de-identified. But even assuming that it was, it may have been de-identified when it left the provider, but when it goes into the Google AI, Google has access to a tremendous amount of information about all of us, and this particular plaintiff alleged that Google had access to geolocation information and other identifying information that could then have been used to re-identify the data.

So, for instance, I think his allegation was that he had received services on a certain date and time at a certain location, and that Google's geolocation data would place him at the healthcare provider at that time. In that context, his allegation is that this would be sufficient to re-identify the information and identify him, because it helps to provide a location for the services. So his underlying allegation is that Google has the ability to re-identify information through that process.

So this does raise some questions with regard to the de-identification safe harbor and when it can apply in the context of AI. Because the way AI works, and the thing that's so great about AI, is that it can accumulate a large amount of data, not just from one source. The de-identified data comes initially from the healthcare provider and goes to the AI system. It may have been de-identified correctly at the healthcare provider, but because the AI is learning and has access to so much other information that's independent of the healthcare provider's information, that other information could then be used to re-identify an individual. I think that's one of the underlying concerns with AI: Can it truly fall within the privacy parameters? In any event, in this litigation, there was an acknowledgment that the information was not truly de-identified. It was a limited data set, and it was being used pursuant to a data use agreement. So de-identification is really not an issue as far as this litigation is concerned.

It does appear, though, that Google has plans to develop its own EHR for clinicians that would gather patient medical records and then leverage its machine learning process to predict clinical outcomes. That was the concept for the technology that was being used as part of the research project. The lawsuit also alleges that the hospital did not notify its patients or get their consent to release information to Google. That, again, is an allegation in a complaint, not necessarily a true factual statement.

Morgan Ribeiro: OK, so Shannon, I want to pause there again and get a definition here real quick. Can you define the difference between PHI and PII?

Shannon Hartsfield: Some of the commentary regarding this case, or other cases like it, might refer to PII instead of PHI. PII is personally identifiable information, and in the HIPAA context, I would argue that there is no real difference with respect to patient data. Protected health information encompasses personally identifiable information combined with health information, including demographic data. So, sometimes people say, well, this is just PII, it's not PHI. But if it's held by a hospital and relates to a patient, even if it's just patient names and addresses and things like that, and not the juicy medical information, it's still PHI. So I think for this case, we'll stick with PHI.

Morgan Ribeiro: Thanks for that definition and clarification. OK, so, Beth, turning it back over to you. What has happened most recently with this case?

Beth Pitman: So, the District Court in Illinois granted the defendants' motion to dismiss, and then the plaintiff appealed to the Seventh Circuit Court of Appeals. The federal Court of Appeals for the Seventh Circuit then recently affirmed that dismissal and agreed with the underlying order of the District Court.

However, the Seventh Circuit looked at this from a different perspective; they came to the same decision but applied a different analysis of the law. They did find that the plaintiff had failed to allege an injury with regard to each of the claims, and that's a specific requirement by law. As we talked about before, in a motion to dismiss, the court has to assume that all the facts alleged are true. So then the court has to decide whether, based on those facts, the plaintiff actually stated a claim that's supported by the law. In this case, the court found that because there was no injury alleged, the plaintiff did not have a claim that would allow the case to go forward. Among other things, the plaintiff had alleged that the healthcare provider had improperly sold PHI to Google in exchange for technology licenses. The court also considered whether a HIPAA violation itself creates injury to the plaintiff or whether it would independently create a breach of contract. Interestingly, the court really didn't come down on the question of whether a HIPAA violation creates a breach of contract. It instead found that there just wasn't an injury, so it never really made that decision.

The decision appears to be a win for privacy compliance officers, but we really caution against thinking that this creates an open highway with no red lights for the disclosure of, or access to, EHR records for AI development. In fact, it raises a lot of questions that we need to review and consider.

All the claims raised by the plaintiff are state law claims. There was really not any decision regarding whether this might actually be a HIPAA violation or might result in some sort of violation that the Federal Trade Commission could enforce. The federal issues were not decided; this is all based on state law, and again, the court found that there really was no injury. The plaintiff alleged a common law privacy claim, and he also alleged that the combination of the medical record data with the geolocation and demographic data collected by Google independently through a smartphone app created a perfect formula for re-identification of the data. The court found that this was really a hypothetical risk. There was no allegation that re-identification actually occurred or that it would actually create a risk of re-identification of the data.

The court also found that the plaintiff lacked standing to bring a contract claim alleging a breach of contract arising out of the notice of privacy practices and the authorization he had signed at the time he was admitted to the healthcare facility. He alleged that these documents created a contract and that a HIPAA violation then breached that contract. But the court, again, found only that there was no injury in fact alleged. The court rejected the plaintiff's claim also because he did sign a release; he signed the authorization, signed the release, so the healthcare provider could provide the information for use in research. The court did not explicitly reject or accept the plaintiff's argument that the notice of privacy practices and the authorization create a contractual agreement or relationship. This was a creative sort of claim.

The court also rejected the plaintiff's claim that he had a contractual right to be paid for the use of his PHI and that its use by Google and the defendant deprived him of the economic value of his PHI. The court noted that in Illinois, the patient does not have a property interest in medical records; that interest is retained by the healthcare provider. For that reason, the court rejected the argument that the plaintiff had any kind of pecuniary interest in his own medical information. The court also rejected the argument that, because the defendant received a financial benefit, the license to the AI, through an unauthorized disclosure of the PHI, the plaintiff was entitled to some payment, such as a royalty, for the use of his information.

Morgan Ribeiro: OK, now I would love to hear from each of you, and Beth, I'll start with you. We've talked about a lot of the legal elements of this and the stages of the litigation, but why is this decision significant for privacy and healthcare professionals and organizations?

Beth Pitman: When you're thinking about the use of AI in healthcare and how AI develops and learns, it's all based on disclosure of information through the AI system and the technology. It raises a lot of issues, and it addresses concerns related to when a healthcare provider can appropriately disclose PHI through the technology and how it can be disclosed. This is all still an evolving area. Of course, the uses of AI have not really been fully fleshed out, and that's going to be something that's going to be continuing to grow and develop over many years.

The court did reject speculative claims, claims alleging only the possibility that anonymized data could be used to re-identify the plaintiff or other individual patients. The data holder was not held accountable for merely possessing the data or allegedly having the ability to re-identify it. The court made it fairly clear that there needs to be some sort of bad intent or actual bad act, an actual intent to re-identify the data. This is significant considering how AI uses data in its learning process and the potential cumulative impact of other data sources on the initially de-identified data that's accessed and used by the AI. The decision underscores the importance of having a HIPAA-compliant notice of privacy practices and authorization, and of making sure that adequate notices are given to patients to explain what information is collected, how the information is processed and what the information may be used for.

So this is probably a good time for healthcare providers in general to review their notices of privacy practices and authorizations and determine whether they are making correct and appropriate disclosures. The plaintiff failed to allege that Google actually took action to use the data to identify the patient, and there were no allegations that the anonymization process was faulty. The university, again, had policies and procedures in place designed to give notice to the patients, which included the notice of privacy practices and the authorizations that were signed, in this case, by the plaintiff to authorize the release of the information. So the healthcare provider in this case, based on the allegations, took adequate precautions to safeguard and protect the patient information and to notify patients about its potential uses. By doing this, both healthcare providers and technology vendors can avoid some potentially significant damages. And again, in this context, there really wasn't any discussion by the court of whether this actually resulted in a HIPAA violation that could be separately enforced by the federal agency.

Morgan Ribeiro: Shannon, what are your takeaways or, based on this case, kind of advice that you have for privacy and healthcare professionals and organizations?

Shannon Hartsfield: I think it is a helpful case in that the plaintiff wasn't successful. And again, I would caution healthcare providers against looking at this case as precedent for being able to use limited data sets for AI without taking a careful look at other aspects of the case.

I'm a total HIPAA nerd, so I actually really enjoyed reviewing the lower court's opinion issued three years ago by Judge Rebecca Pallmeyer. So even though the plaintiff lost in both the lower court and the appellate court, the lower court's opinion, in my view, should give privacy officials pause. It's worth a read. Even if a plaintiff can't necessarily get damages for some sort of HIPAA violation, as Beth said, the Department of Health and Human Services Office for Civil Rights can certainly go after them. The Federal Trade Commission can certainly go after them if there is a HIPAA violation. And so the lower court raised some important questions, particularly about whether this whole deal was a sale of protected health information.

Under HIPAA, you can't sell protected health information without patient authorization, with some very limited exceptions. One of those exceptions allows you to sell protected health information for research purposes, but the money that you get or the remuneration that you get has to be a reasonable cost-based fee to cover the cost to prepare and transmit that PHI for the research purposes. And the court actually found that the plaintiff's claim that the PHI was improperly sold was on firm ground. That's the language the lower court used. The court observed that the license that the hospital got to use these trained models and predictions was indirect compensation, and the court noted that remuneration doesn't have to be money, it can be an in-kind exchange. Google didn't explain why this exchange of PHI for the right to use these models was the equivalent of a reasonable cost-based fee, rather than direct or indirect remuneration that would require patient authorization. It was still a sale, arguably, even though the disclosing entity retained the ownership rights to the data.

Another interesting aspect of the court's opinion was something Beth touched on, the emphasis on the wording of the HIPAA notice of privacy practices. All too often, covered entities just maybe copy somebody else's notice of privacy practices and stick it up on the wall and on the website, or they just copy the government's model notice of privacy practices without really reading it, without tailoring it to their particular situation. And in this case, the defendant's notice of privacy practices promised that the hospital would obtain a patient's written permission for the sale of medical information, and the court concluded that this type of general language is more stringent than HIPAA because it didn't reference the HIPAA exception for the sale of PHI for research in exchange for a cost-based fee.

Now, I'm wondering, is the court expecting these notices of privacy practices, which have to be posted on the wall, handed to patients and posted on websites, at least under current rules, to contain all the itty-bitty nuances of HIPAA in that one document? It seems like that's what the court is saying. And something to note is that the language the hospital used in its notice of privacy practices about sales of PHI appeared to be the same as what's in the government's model notice. And I would say that the notice is just designed to be a general explanation of how PHI is used, and it doesn't necessarily have to contain every single exception.

However, partially as a result of the lower court's opinion back in 2020, we started looking at this issue in our model notices of privacy practices, and we stick a caveat in there that says written permission will be obtained for the sale of PHI, except if permitted by HIPAA. And I'm hopeful that a court would say that "except if permitted by HIPAA" language gives us enough flexibility to do things that HIPAA does allow and that aren't a sale.

So, the case definitely raises a number of interesting HIPAA regulatory issues that would need to be considered for any project involving artificial intelligence or machine learning, including whether a particular endeavor triggers HIPAA's research provisions, in which case you may need institutional review board, or IRB, approval or patient authorization to do the research; whether the data is properly de-identified, if you're going to go the de-identification route; whether you have the right agreements in place between the entities; and whether the data transfer is a prohibited sale. And the key, in my mind, is what promises have been made to the patient through the notice of privacy practices or otherwise, and whether you are living up to the promises you've made to patients regarding their information.

Morgan Ribeiro: Thank you both. I think, as you both have mentioned, this is just one case, right? And there's certain takeaways we can have and interpretations of how this can apply in other situations, but the story is not ending here. I think given the rise and increasing interest in artificial intelligence, for example, and machine learning and how that applies in the healthcare context, I think we're just getting started here. And it's really an important kind of learning lesson and something for our healthcare clients and professionals and data and security experts and compliance officers, you name it, it's definitely something that we need to be aware of and learn from. Any last parting thoughts from either of you?

Shannon Hartsfield: Just that when you're dealing with artificial intelligence, it's definitely a new topic. AI has been around a long time, but how AI works with HIPAA and state privacy laws is something that we're going to continue to look at.

Morgan Ribeiro: Great. Thank you both so much. Appreciate your time today.