Thursday, April 13, 2017

FHIR Security model is enterprise centric

NO! This is a false understanding. FHIR has no security model. And this is a good thing.

FHIR is designed first and foremost as a data model with a few expected interaction models (REST, Messaging, Document). The expectation is that many security models will exist, and that applying those security models does not impact the most important priority: getting the data model correct. This is especially exercised with REST, but is not limited to REST. REST is just the most likely first interaction model, and one that is understood to drive toward a good transport-agnostic data model.

There are many workgroups working on specifications for how to apply OAuth to FHIR REST, but these are not fundamental to FHIR; they are alternatives. There are various flavors of OAuth as well: some more Patient centric, some more enterprise centric, and some cross-enterprise centric. There is work on OAuth scopes. There are others working on pure mutual-authenticated TLS for organization-to-organization exchange. There are others looking toward SOAP. There are others applying security to the packaging so that it can travel over many transports with end-to-end security. Others are looking to smart-contracts on blockchain. Others are focused just on enabling Privacy. Others are tagging data so that rules can be applied. All enabled by the very fact that FHIR is not bound to one security model. This is an important fact.
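As a concrete illustration, here is a minimal sketch (Python with the requests library; the endpoint URL, token, and certificate file names are all hypothetical) showing the very same FHIR REST read carried over two different security layers:

```python
import requests

FHIR_BASE = "https://fhir.example.org/fhir"  # hypothetical FHIR server

# Option 1: an OAuth bearer token (e.g., from a SMART-on-FHIR or other OAuth
# authorization flow) carried in the standard HTTP Authorization header.
patient = requests.get(
    FHIR_BASE + "/Patient/example",
    headers={"Authorization": "Bearer <access-token>",
             "Accept": "application/fhir+json"},
)

# Option 2: mutual-authenticated TLS between organizations. The FHIR
# interaction and the resource content are identical; only the security
# layer underneath has changed.
patient = requests.get(
    FHIR_BASE + "/Patient/example",
    headers={"Accept": "application/fhir+json"},
    cert=("org-client.pem", "org-client.key"),  # hypothetical client credentials
    verify="partner-ca.pem",                    # hypothetical trust anchor
)
```

The point is not these particular mechanisms; it is that the data model and the interaction are untouched by whichever security model is layered on.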

I am sorry that it seems as though FHIR is bound to an enterprise OAuth security model. I suspect this impression comes from the most visible project -- SMART-on-FHIR... which is enterprise centric. SMART-on-FHIR is a fantastic project, very important, and the one that really has the necessary engagement to 'make it real'. That said, these other projects are also doing good work. Not all projects have, or could have, the marketing power that SMART has... 

FHIR has many security models, while having none


Tuesday, April 4, 2017

Stop using OPT-IN and OPT-OUT

In various conversations on Consent, including #FHIR Consent, discussions often get mixed-up because we use the terms "OPT-IN" and "OPT-OUT". These terms are trouble. We need to stop using “OPT-IN” and “OPT-OUT”.

I want to propose a set of terms. I will never get everyone to stop using opt-in and opt-out, but where better terms can be used, I propose better terms. Better, as in, more descriptive and accurate communications.

The reason is that these terms can mean very different things based on what the person listening is thinking. They can mean a consent ‘model’, or they can mean a consent ‘state’, or they can mean an 'action' by the patient. It is especially confusing because there is a possibility for all three to be the same, or not the same.

State Model --

In this model we look at consent as a state-diagram, also called a finite-state-machine or a directed-graph. A state-diagram is made up of a finite number of 'states', diagrammed as circles, with arrows indicating events that can occur. A state-transition table or UML representation can also be used.

At the most gross level of Privacy Consent we recognize that there is a 'state' where data is shared, for legitimate medical treatment purposes, with trusted partners, who are authorized by their licensing and role. And another state where data is NOT shared, except for legitimate and authorized medical emergency...

Note I am defining a Treatment purpose of use, setting parameters that indicate that the sharing would be for legitimate and authorized purposes. This is to counter arguments that distract from my point. Insert any caveats necessary, and there is still an understanding of OPT-IN and OPT-OUT as a state of consent.

as a State:
  • OPT-IN state – Permitted to share the patient's data for Treatment purposes
  • OPT-OUT state – Denied from sharing the patient's data for Treatment purposes
I think this is better said using the terms Permit and Deny.

Event Model

This might also be called the 'action'. It is often predominantly determined by regulation. 

Some view OPT-OUT as a model where absent an indication from the Patient, their data can be used. This is to say that the patient must OPT-OUT if they don't want their data shared.

Some view the OPT-IN model as one where, absent an indication from the Patient, their data cannot be used.

You will note that this model uses terms that are also aligned with the 'first action' that a patient can take.
I think this is better represented by the "event" or "action" of the patient giving authorization, that is to "Authorize"; or the patient revoking that authorization, that is to "Revoke".

First state

This perspective uses the term to define the starting point, as a state.
  • In an opt-in environment, the patient is automatically put into the opt-in state. 
That is an improper definition, as it uses the term to define itself. So I will re-write it using the "Permit" state term:
  • In an opt-in environment, the patient is automatically put into the Permit state. 
This perspective is important to understand, but does not add clarity. Once the patient has made that first action, the distinction is no longer valuable. That is to say, the second or third or fourth action just confuses the perspective.

I propose we use:

States (Leveraging these terms as used in XACML):
  • Permit – a ‘state’ where data is shared 
  • Deny – a ‘state’ where data is NOT shared 
Model - Initial State
  • Implied-Consent – A ‘model’ where, absent a consent, sharing the patient's data is Permitted.
    • Start in the Permit state
  • Explicit-Consent – A ‘model’ where, absent a consent, sharing the patient's data is Denied.
    • Start in the Deny state
The Initial State is usually driven by regulation, such as HIPAA, which is a model where patient data is allowed to be used for Treatment, Payment, and Operations without getting a consent from the patient. It is common for HIPAA to be called an Implied-Consent environment, for the patient has implied their consent by seeking treatment.

Whereas the EU has an Explicit-Consent dominant model. That is, no action may be taken on data without consent from the individual the data is about.

Explicit-Consent is also common with topics considered more sensitive than normal health topics, likely due to stigma. These topics are often held to an Explicit-Consent model, even where normal health topics follow Implied-Consent.

Also, some regions, or even organizations, simply choose to use an Explicit-Consent model for various reasons. Explicit-Consent can be seen as more aligned with Privacy Principles, but it can also impede progress that might otherwise be 'implied'.

I propose a set of terms, while not defining terms for the 'actions'. This is because the actions tend to be very realm specific. Some environments allow a verbal consent, others allow a web-form checkbox, others require digital signatures, others have very specific language, others have special technology, others allow for delegation and assignees, etc. So the actions, or 'state transitions', are much harder to agree upon.
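To make the separation of state, model, and event concrete, here is a minimal sketch (Python; all names are my own, chosen only for illustration) of the two states, the two models as initial states, and the Authorize/Revoke events:

```python
from enum import Enum

class ConsentState(Enum):
    PERMIT = "Permit"   # data is shared for Treatment purposes
    DENY = "Deny"       # data is NOT shared (break-glass emergency access not modeled)

# The 'model' only determines the initial state.
INITIAL_STATE = {
    "Implied-Consent": ConsentState.PERMIT,   # e.g., HIPAA Treatment/Payment/Operations
    "Explicit-Consent": ConsentState.DENY,    # e.g., EU, or sensitive health topics
}

class PatientConsent:
    """A deliberately tiny two-state machine using the proposed terms."""

    def __init__(self, model: str):
        self.state = INITIAL_STATE[model]

    def authorize(self) -> None:
        # patient 'event'/'action': grant authorization
        self.state = ConsentState.PERMIT

    def revoke(self) -> None:
        # patient 'event'/'action': withdraw authorization
        self.state = ConsentState.DENY

# Example: an Implied-Consent environment where the patient later revokes.
consent = PatientConsent("Implied-Consent")
assert consent.state is ConsentState.PERMIT
consent.revoke()
assert consent.state is ConsentState.DENY
```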

And, there are certainly more states...




Updated: 4/6/2017 to include recommended "Event" description and diagram

Monday, March 20, 2017

Healthcare Blockchain use?

Today starts the "Healthcare Blockchain Summit". I wish I could be there. What makes a good use of Blockchain, while also helping Healthcare? What are the questions Blockchain proposals need to answer? Blockchain is the hot word right now; Gartner indicates that it is still at the Peak of Inflated Expectations.
Gartner estimates 90% of enterprise blockchain projects launched in 2015 will fail within 18 to 24 months. Part of the problem is that the majority of enterprise blockchain projects don’t actually require blockchain technology. In fact, these projects would probably be more successful if they did not utilize blockchain.
Gartner gives fantastic reasons that I very much agree with. I am not going to duplicate them here, as they do a great job.

Blockchain is a public ledger that is maintained by an interested network of systems. One can make a private blockchain with private parties; this is possible, but I would argue it takes much of the value out of the blockchain system. The blockchain can be validated by anyone, regardless of whether they are just looking, have just joined as a participant, or are a long-standing validating node.

So what does someone proposing a Healthcare use of blockchain need to answer?

  1. What problem is being solved? Why is that problem not best solved with a 'classic' database? The excuse that the problem has not been solved today is not an acceptable answer. The problem has likely not been solved yet because it is not valuable to solve, so throwing expensive infrastructure like blockchain at it is unlikely to succeed.
  2. How is Privacy protected? The nature of blockchain is that the information put on the block must be sufficient for all parties to Validate. Putting purely encrypted information, or just pointers to data protected elsewhere, is not helpful. This is why I recommend NEVER putting healthcare data on the blockchain. Just because healthcare data can't be put on the blockchain does not mean that there is no use of blockchain in healthcare. It is just not a treatment use-case.
  3. How is Identity managed? The nature of blockchain is that it is a system that does not require Identity to be known. There is an Identity linked cryptographically to information and signatures on the blockchain. One can always expose one's own identity. Is this necessary with the proposed system? Exposing Identity might not be a bad thing, but it must be addressed either way. This fact means that blockchain is a ready-made Pseudonym, which is why I proposed it be used to advertise the availability of de-identified data.
  4. How is value created? Blockchains are expensive. How do participants gain value from their participation in the Blockchain? With Bitcoin, the value is equivalent to money and is gained through proof-of-work: mining grows the chain with blocks of transactions offered by other participants, each of whom offers some bitcoin as a fee to have their transaction included. With a proposal to use Blockchain for Healthcare, one must have a good answer for how value is created, transferred, and consumed. It need not, and likely won't, be the same system as Bitcoin. Hence it is likely a different use of Blockchain for a healthcare-specific purpose, like my proposal to use it for Research Notebooks.

Blockchain is good for?

I don't think that healthcare data should go into the blockchain. But there is good value in using blockchain to (a) advertise the availability of data, (b) publish terms of use (smart-contracts) that, when met, unlock access, (c) carry Merkle tree signatures used to validate the authenticity of data managed elsewhere, (d) track revisions, and (e) record (audit) access and use.
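As an illustration of (c), here is a minimal sketch (Python standard library only; the sample documents are placeholders) of computing a Merkle root over hashes of documents that themselves stay off-chain:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    """Fold a list of leaf hashes up into a single root hash."""
    level = list(leaf_hashes)
    if not level:
        raise ValueError("no leaves")
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])           # duplicate the last hash on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The documents themselves stay in the custodian's systems; only their hashes
# are folded into the root, and only that root would be anchored on-chain.
documents = [b"<CDA document 1>", b"<CDA document 2>", b"<lab result 3>"]
anchor = merkle_root([sha256(d) for d in documents]).hex()
```

Anyone later holding a document plus its sibling hashes can prove it was covered by the anchored root, without the blockchain ever carrying the healthcare data itself.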

Healthcare Blockchain - Big-Data Pseudonyms on FHIR
Blockchain and Smart-Contracts applied to Evidence Notebook

Friday, March 3, 2017

Multiple formats of the same Document content

I propose that “The most technically advanced” document format be considered the Prime, with all of the other formats considered Transforms (XFRM) from that prime document. Thus if the Document Source can create a C-CDA 2.1, then that becomes the prime. Yet if a Document Source can only create a C32 and a PDF, then the C32 would be the prime. In this way, regardless of whether the secondary formats were actually derived from that prime document, they would be Registered as if they were. This enables a Document Consumer to follow the XFRM link to the Prime without needing to understand all the formats presented. The Document Consumer can also follow the XFRM links down to all the ‘equivalent’ formats to discover those to choose from.

details.....

Now that C-CDA 2.1 is emerging, the following situation becomes more prominent. The situation is that the same content could be encoded in various document format types.
  1. How do you publish in XDS/XCA a set of documents that cover the same content but are different in their encoding format? 
  2. How do Content Consumers perceive when they find a set of documents that seem to cover the same content but are different encoding format? 
  3. How do we prevent miscommunication, misinterpretation, or, worse, duplicate attribution? 
Various document encoding formats:

specification | year | mime-type | format
C-CDA 2.1 | 2015 | text/x-hl7-text+xml | urn:hl7-org:sdwg:ccda-structuredBody:2.1
C-CDA 1.1 | 2013? | text/x-hl7-text+xml | urn:hl7-org:sdwg:ccda-structuredBody:1.1
CCD | 2007 | text/xml | urn:ihe:pcc:xphr:2007
C32 | 2007 | text/xml | urn:ihe:pcc:xphr:2007
CDAR2 structured | 2005 | text/xml |
CDAR2 unstructured | 2005 | text/xml | urn:ihe:iti:xds-sd:pdf:2008
FHIR Document | 2017 | application/fhir+xml, application/fhir+json |
PDF - rendered view of C-CDA using publisher's stylesheet | 2001 | application/pdf |
XDS-I | 2005 | application/dicom |
CCR | 2005 | application/x-ccr |
Bluebutton text | 2013 | text/plain |


As you can see, C-CDA 2.1 is not really special; it just happens to be the format that has recently been released while C-CDA 1.1 documents are lying around. As proof, FHIR Documents will re-open this discussion, especially with the CDA-on-FHIR efforts. Thus although C-CDA 2.1 isn’t special, it is a nexus today.

Example using a Discharge Summary:

An example of a document that might need to be published in multiple formats is a Discharge Summary for an Episode of Care. This use-case makes it most clear why the very same content might be made available in multiple formats. Other document types are also possible.

Why publish multiple formats?

The main reason to publish multiple formats is for the benefit of various Document Consumer systems. Given a Health Information Exchange, or Nationwide Health Information Exchange, there will be a variety of capabilities and use-cases for the hundreds or thousands of various Document Consumers. Some of these Document Consumers might not be updated at each revision of the C-CDA specification, thus they can only consume an older format.

All this is for the benefit of the Document Consumer, but it creates a problem for the Document Consumer too. How do they know that the very same content is represented in the different formats, versus the different formats actually being about different content? Ideally they would have some way of discovering this short of retrieving all documents and comparing them.

A user should not be bothered with making a choice between various encoding formats, all for the same content. It would be best if the Document Consumer could automatically pick the ‘best’ format. This pick might be:
  • simply because that Document Consumer only supports one format. Example might be an old piece of software that can only consume C32 (aka XPHR). 
  • might be because a Document Consumer is able to render one format better than another format for a given context. For example, a patient view versus a clinical view. A Patient Generated Health Data (PGHD) CDA document vs a CCDA CCD. 
  • might be a good workflow reason to show a PDF rendered view, as that specific rendered view was that of the Document Source (publisher). 
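A minimal sketch of that automatic pick (Python; the preference list is hypothetical and keyed on the XDS formatCode values from the table above):

```python
# Hypothetical preference order, most preferred format first. A real Document
# Consumer would populate this from the formats it actually supports and
# renders best for its context.
FORMAT_PREFERENCE = [
    "urn:hl7-org:sdwg:ccda-structuredBody:2.1",
    "urn:hl7-org:sdwg:ccda-structuredBody:1.1",
    "urn:ihe:pcc:xphr:2007",
    "urn:ihe:iti:xds-sd:pdf:2008",
]

def pick_best(document_entries):
    """Return the entry whose formatCode this consumer prefers most, else None."""
    rank = {fmt: i for i, fmt in enumerate(FORMAT_PREFERENCE)}
    supported = [e for e in document_entries if e.get("formatCode") in rank]
    return min(supported, key=lambda e: rank[e["formatCode"]]) if supported else None
```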

Not rewriting history

It should be noted that I am not talking about going back in history to create more formats of documents previously published. Revising history is against medical-records principles.

Those old formatted documents must forever be supported by Document Consumers. That is to say that a Document Consumer should never remove the functionality it has to consume older formats.

What I am focused on here is the front edge of standards advancing: what happens as ‘new’ formats become supported by a Document Source, and how to best support Document Consumer needs.

Potential Solution

It would seem that the closest representation in XDS is the transform (XFRM) association, because it means two representations of the same information, as opposed to RPLC, APND, etc. However, it may not always be right to say one is a transformation of the other. They could all have been created at the same time, from the same underlying EHR data, simply for the purpose of satisfying the largest range of clients. In this case, which one is prime?

That said, a Transform (XFRM) association in XDS does have a directionality component. It has a source side and a transformed side. Thus to use the Transform (XFRM) association we need to determine a directionality. I look to IHE PCC and IHE ITI to see if there is a precedent. There is similar use of Transform (XFRM) in XDS-SD, and also APPC. In both documented cases the directionality component is left to ‘local policy’. So it would seem that the IHE committees have not yet decided.


I propose that “The most technically advanced” document format be considered the Prime, with all of the other formats considered Transforms (XFRM) from that prime document. Thus if the Document Source can create a C-CDA 2.1, then that becomes the prime. Yet if a Document Source can only create a C32 and a PDF, then the C32 would be the prime. In this way, regardless of whether the secondary formats were actually derived from that prime document, they would be Registered as if they were. This enables a Document Consumer to follow the XFRM link to the Prime without needing to understand all the formats presented. The Document Consumer can also follow the XFRM links down to all the ‘equivalent’ formats to discover those to choose from.

This all said, there could be some policy reason why a different format is considered to be the prime by the Document Source. For example, the Document Source publishes in C-CDA 1.1 and uses a stylesheet transform to produce the C-CDA 2.1. Even so, a Document Consumer should be able to rely on the top-most (Prime) Transform as the most complete and accurate.
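To make the proposal concrete, here is a sketch (Python; the 'technical advancement' ranking and the entry structure are assumptions for illustration) of a Document Source picking the prime and expressing every other format as a Transform of it. The sourceObject/targetObject orientation shown is one reading; as noted above, IHE has so far left the directionality to local policy.

```python
# Assumed ranking of "technical advancement", most advanced first.
ADVANCEMENT = [
    "urn:hl7-org:sdwg:ccda-structuredBody:2.1",
    "urn:hl7-org:sdwg:ccda-structuredBody:1.1",
    "urn:ihe:pcc:xphr:2007",
    "urn:ihe:iti:xds-sd:pdf:2008",
]

def build_xfrm_associations(entries):
    """Pick the prime entry and express every other entry as a Transform (XFRM) of it.

    `entries` is a list of dicts with at least 'entryUUID' and 'formatCode'.
    Returns (prime, associations); each association is a plain dict that a
    Provide-and-Register builder could turn into an ebRIM Association.
    """
    rank = {fmt: i for i, fmt in enumerate(ADVANCEMENT)}
    prime = min(entries, key=lambda e: rank.get(e["formatCode"], len(rank)))
    associations = [
        {
            "associationType": "urn:ihe:iti:2007:AssociationType:XFRM",
            "sourceObject": e["entryUUID"],      # the secondary ('transformed') format
            "targetObject": prime["entryUUID"],  # the prime document
        }
        for e in entries
        if e is not prime
    ]
    return prime, associations
```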

Robust Document Consumer

Given that whatever guidance we advocate would be adopted over time and not uniformly, a Document Consumer needs to handle whatever is available, and be robust to formats that are not understood. Unfortunately, there is probably not a fully deterministic way to go. For example, a given Document Source might adopt this guidance while other Document Sources might not, so some but not all equivalent documents would have associations.

Unresolved technical issues:

The various formats are not fully equal. Clearly a PDF format doesn’t carry the fidelity of data that a C-CDA 2.1 can. There might be use cases where this difference is not a problem, but any loss of fidelity is potentially problematic. Thus there must be some recognition that the various formats might all be “Transforms” (XFRM), but are not equal. This is why I recommend the prime be the most technically advanced, so that the number of hops away from the prime is an indication of potential loss of fidelity.

There is no obvious metadata element where this ‘completeness’ or ‘accuracy’ or ‘integrity’ evaluation can be placed. There is vocabulary available in the Value-Set (integrity) recommended for ConfidentialityCode… I am not yet ready to recommend this.

Conclusion

This is just a recommendation. It might kick off a discussion in IHE to write similar recommendations. It is not clear whether this is an ITI or PCC responsibility.


Attribution: Tone Southerland and Joe Lamy both helped me with the content. Thank you!


Keith covered this in a different way back in 2009, focused more on template inheritance -- Template Identifiers, Business Rules and Degrees of Interoperability -- with a cool graphic

Wednesday, January 25, 2017

Enabling Point-Of-Care Consent

Gathering Privacy Consent is never easy. A Patient, when they are healthy, has no interest in giving Consent for future actions: mostly because they don't want to admit they might get sick in the future, and secondarily because they don't want to do unnecessary paperwork. Realistically, they just want healthcare to work, and not get in the way of their getting the best treatment. This is why many exchanges are moving toward an 'implied consent' that allows a patient to explicitly withdraw their authorization, but in the absence of any action by the Patient the data would be shared for "Treatment" purposes. This default behavior applies only to "Treatment", not to "Research" or other purposes.

That said, Consent is still sometimes needed. It might be needed because the organization uses a Default of not sharing. It might be because the patient has sensitive health topics that require explicit consent to release. It might be because the patient has Withdrawn their authorization, but now wishes to enable one provider organization access for a visit, careplan, or episode of care.

Given the XDS and XCA interactions that are often used in a Health Information Exchange (HIE) or a National Health Information Exchange (NHIE), there is no standards-based/profiled way to enable a point-of-care consent-gathering workflow. So today, if Consent is needed and not already captured, then data access is blocked. Today the patient must go to the custodian organization and fulfill its consent workflow needs. This might be easy, through a web tool or phone call, but no matter how easy, it is difficult when the Patient is not feeling well.

Consent Negotiation

So... there is a need to enable negotiation between a Custodian that needs a consent, and the Requesting organization that is at the Point-Of-Care... This is the problem that CareQuality is trying to address. My understanding is that much of this comes from the experience of Epic with their CareEverywhere system. This is getting designed in CareQuality now. The approach should become a standard that anyone can use, hopefully through IHE XDS/XCA/XUA.

The basics are shown in the following interaction diagram. It starts with a normal XCPD or XCA request. The Responding Gateway will check whether Authorization (AuthZ) is already enabled. In this case everything else is okay, except that a Consent is needed. I say 'everything else is okay' because one needs to make sure the requesting organization is authorized to even ask, and is authorized to gather point-of-care consent.

The new thing, highlighted in YELLOW, is that the Responder can inform the Requester that obtaining specific consent types would allow more information to be exposed. This can be detected by the Requesting organization, and it is also backward compatible, so a Requesting organization that doesn't know about this new capability can continue as today with no data available. A Requesting organization can look at the policy choices offered, obtain one of them from the patient, store the result locally, and try the same transaction again with an Assertion that it has obtained the specific consent. The Responding Gateway would now see the Assertion and allow disclosure of the data according to the asserted policy. From that point forward that partner would include this Assertion in all requests, and the Responder would continue to disclose under that policy.
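A sketch of the Requesting Gateway side of that negotiation (Python; request_documents and obtain_consent are hypothetical stand-ins for the XCA query carrying an XUA assertion and for the local consent-capture workflow, and the response field names are invented for illustration):

```python
def retrieve_with_point_of_care_consent(patient_id, request_documents, obtain_consent):
    # Normal request first, with no consent asserted.
    response = request_documents(patient_id)
    if response.get("documents"):
        return response["documents"]

    # The new, highlighted behavior: the Responder lists consent policies that,
    # if obtained, would allow more information to be exposed.
    for policy in response.get("acceptable_consent_policies", []):
        if obtain_consent(patient_id, policy):           # capture consent at the point of care
            # Retry the same transaction, now asserting the captured consent.
            retry = request_documents(patient_id, asserted_consent=policy)
            if retry.get("documents"):
                return retry["documents"]

    # Backward compatible: a Requester that never learns of the capability
    # simply ends up here with no data available, as today.
    return []
```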

Closing the Loop

This works on a Partner-by-Partner basis. This also relies on the Partner that gets a consent to maintain that consent on behalf of the Responding organization.

An augmentation being discussed is to somehow get the Consent paperwork back to the Responding organization. This might be through the exchange, or by postal mail or FAX. CareQuality is going to enable the exchange-based pathway by adding an additional element to indicate that the paperwork is available online. This additional element might be false to begin with, and change to true a day or a week later.

It is also easy to Query for consent documents. Provider X might set a timer and query each day until it appears.
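A sketch of that polling loop (Python; find_consent_documents is a hypothetical stand-in for whatever query, such as an XDS FindDocuments, the exchange supports):

```python
import time

def await_consent_paperwork(patient_id, find_consent_documents, max_days=30):
    """Query once a day until the Responding organization's consent document appears."""
    for _ in range(max_days):
        documents = find_consent_documents(patient_id)
        if documents:
            return documents
        time.sleep(24 * 60 * 60)   # wait a day before querying again
    return []
```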

Once this is received by the Responding organization it is possible for that organization to record the consent and have it affect ALL partners. This is not part of the CareQuality system, but rather is a potential policy decision a Responding organization could make.

This is a developing system, so it is not fully defined. I expect it to continue to develop this winter and spring. I would hope it is then brought to IHE for standardization next year.

Past articles on Patient Privacy controls (aka Consent, Authorization, Data Segmentation)

Tuesday, January 24, 2017

FHIR Connectathon has changed and it is good

I had been unable to attend an HL7 WGM for a year, a problem that is now remedied. This means that when I attended FHIR Connectathon 14, prior to the HL7 Workgroup Meeting in San Antonio, TX, I was shocked to experience the new FHIR Connectathon. This is good change on many levels. The others on the FHIR core-team have seen the changes over time, so the transition was not as shocking for them as it was for me.

In past years, FHIR connectathon was more focused on people trying to use the core FHIR specification. This resulted in discussions to help understand the core concepts, core interaction models, core vocabulary, core data-types, and core Resources. This also resulted in corrections to the FHIR specification, so that future people would understand right from the specification. Other discussions were on toolkits, and how to make the toolkit better. There were discussions on how to implement a general purpose FHIR server. There were discussions on how to implement clients, starting from general concepts. The rumbling within the room was very focused on the FHIR core, and basic understanding.

This year, FHIR Connectathon was more of a 'workgroup' meeting; more specifically, an "Agile" "Open-Workplace". There were 16 distinct tracks. These are focus areas. All of them are broader concepts than one Resource; most are more of a 'workflow'.
Each of these was centered around one or more round tables, where the table included various implementers interested in the topic, and a leader for the topic. What happens is very natural: they would discuss what the goal was, and adjust the goal organically as they progressed. They would try the basics described in the above links, but they would evolve based on the results. 

What really was a shocker is that Security and Privacy became a topic at most of these tables. 

Historically we have just pointed at SMART for security. SMART is a 'good enough' solution for the purposes of hackathons. SMART is not likely to be a good enough solution for production. This is a topic to be covered in much more detail in the coming months. SMART is now a project within HL7, so it will need some formal recognition, and framing. 

There was also interest in the basics of service discovery, patient data location, policy bridging, service response behavior, etc. All the kind of things that an operational environment will need to figure out. None of these were decided conclusively. BUT the very fact that they were talked about is evidence that FHIR Connectathon has changed... and it is good... 

FHIR has matured, it is growing up. It is not yet ready to fly away from the nest. But it is definitely no longer an infant.


Monday, January 16, 2017

IHE on FHIR

IHE is still relevant in a FHIR world. But FHIR has changed the world, and IHE needs to adjust to this new world.

Profiling is still needed

The concept of profiling FHIR is still needed. The difference today is that FHIR is ready and instrumented to be Profiled. It even has a set of Profiles coming from HL7. This is not a threat, this is an opportunity for IHE.

IHE has a set of 10 profiles today (MHD, PDQm, PIXm, mACM, (m)ATNA, CMAP, GAO, RECON, DCP, MHD-I). These are mostly basic profiles, as opposed to complex profiles: a basic profile simply points to a FHIR Resource as the solution for a defined problem. This is helpful to the audience, but does not add much value. It is this lack of value-add that makes what IHE has done today with FHIR Profiling seem to conflict with core FHIR.

That FHIR needs to be Profiled is a fact. A Profile is fundamentally a set of constraints and interactions applied to a Standard to achieve a specific purpose. This is just as true with FHIR as with any other standard. 

IHE needs to focus on larger profiles. Profiling the basics of a transaction (http RESTful, for FHIR) is not helpful. But there are many larger workflows, larger data-integrity concerns, and larger system-of-systems problems.

FHIR is Profile Instrumented

FHIR is ripe for being Profiled, and is instrumented to be profiled. That is, it has mechanisms built into the specification to make Profiling easier, more effective, repeatable, reusable, and discoverable.

IHE needs to modernize

  • The audience expects use of Conformance resources and tools (see above) 
  • IHE can only publish PDF – FTP distribution is not good 
  • IHE does not have a usable mechanism for creation, publication, and maintenance of vocabulary, value-sets, and URLs 
  • IHE can’t publish conformance constraints in a technical format (XML, JSON) 
  • PDF is clunky, not hyper-linked, and not helpful with programming tools 
  • The audience can’t find IHE if they start at the FHIR specification 
  • Yet, those Implementation Guides that use FHIR.ORG are found 
  • IHE has little development and deployment community engagement 
  • FHIR has multiple tools to help outreach and interaction with the community 
  • IHE has email 
  • IHE workgroups are randomly engaged and act differently from one another 

IHE needs to leverage FHIR 

IHE should not fear HL7, and HL7 should not fear IHE. Working together with defined and distinct purposes is best for both.

IHE supports FHIR.ORG. It is my understanding that FHIR.ORG is where activities outside of the specification will happen. Much of what IHE does is in this space. This includes the tooling capability and support for profiling, but also involves community engagement.
IHE Profiles of FHIR must be ‘built’ using the FHIR “IG Publisher”. They would be Balloted and published by IHE. The normal IHE supplement (PDF) might be okay, or might need slight changes to Volume 2 (Volume 1 is good). Possibly a new Volume 5 is needed to “hold” technical pointers from the FHIR “IG Publisher”. This is because today's format of Volume 2 is overly complex for a FHIR profile, especially the http REST transport. Besides the Supplement form, the Technical bits would be published on the FHIR Registry with pointers back to the IHE supplement.
IHE and HL7 Leadership engagement is needed. We have gotten this far based on individual efforts, but the scale of the solution is too big for individual efforts. We need formal agreements, formal sponsorship, and formal recognition. I think the time is ripe now.