Monday, March 19, 2018

FHIR really was positively different

I had a short but very satisfying interaction with a developer at HIMSS 2018. They had implemented a pilot project using FHIR. Their use-case was to instrument the DoD systems with a FHIR Server API, and similarly instrument a VA Vista system with a FHIR Client. The goal was to show how providers at the VHA could see the DoD data while using the Vista experience they are familiar with. 

They found that adding a FHIR Server API in front of the DoD system was quite achievable. 

They found that placing a FHIR Client API behind an instance of a VHA Vista was also quite achievable. I spent a bit more time understanding this, as I have been working within the VHA for over a year. What he actually did was stand up a new instance of Vista. It should be noted that each VHA site has its own instance of Vista. Vista is an open-source project, so it is easy to stand up your own instance. What he did differently is that, rather than have a database under that Vista instance, he placed a service that implements a FHIR Client. Thus, whenever Vista wanted to hit its own database, he intercepted that database request, made the equivalent FHIR API call, and provided the FHIR response back as the database response. I suspect he did some optimization, but you get the picture.
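
To make the pattern concrete, here is a minimal sketch (not the pilot's actual code; all names, URLs, and tokens are hypothetical) of a data-access shim that answers an internal "read" request by calling a FHIR server instead of the local database:

    import requests

    class FhirBackedDataStore:
        """Hypothetical shim standing in for Vista's local database."""

        def __init__(self, fhir_base_url, access_token):
            self.base = fhir_base_url.rstrip("/")
            self.headers = {
                "Authorization": f"Bearer {access_token}",
                "Accept": "application/fhir+json",
            }

        def read_patient(self, patient_id):
            # Where Vista would query its own database for a patient record,
            # the shim performs the equivalent FHIR read instead...
            resp = requests.get(f"{self.base}/Patient/{patient_id}",
                                headers=self.headers)
            resp.raise_for_status()
            # ...and the FHIR response is handed back as if it were the
            # database response (mapping back to Vista's internal shape elided).
            return resp.json()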

He had this fully working, and it worked very fast. A VHA user could interact with this instance of Vista just as if it were their own site's data. The interaction, the User Experience, was identical to what they are used to.

Knowing that VHA might be switching over to Cerner, and knowing that Cerner has a FHIR sandbox available... he directed his Vista FHIR Client to speak to the Cerner sandbox FHIR Server. With only a change to the endpoint configuration and security token settings, he found that his Vista instance worked almost flawlessly. This system was not designed to work with the Cerner FHIR Server... BUT... because FHIR is actually delivering on Interoperability, by being simple and well defined, the system just worked.
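
Continuing the hypothetical sketch above, the re-targeting he describes amounts to nothing more than a change of two settings; the endpoint URL and token below are illustrative only:

    # Same client code; only the FHIR base URL and the security token change.
    sandbox_backend = FhirBackedDataStore(
        fhir_base_url="https://fhir-sandbox.example.com/r4",
        access_token="token-issued-by-that-sandbox",
    )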

When I mentioned that I am part of the FHIR standards development team, he wanted me to know how overwhelmingly happy he was with this experience. He expressed that he has had long experience with networking standards, including HL7 v2, CDA, and others. He wanted me to know that "FHIR really was [positively] different."

I have no idea what will happen with this pilot. It was not part of the VHA Lighthouse project. It was also not part of the FHIR work going on with MyHealthVet (the project to which I am now assigned).

Friday, March 2, 2018

FHIR Consent Resource mapping to Kantara Consent Receipt

I really like the work that Kantara is doing with Consent Receipt. I think they are doing what is needed. Specifically, they are not trying to define an internal consent resource, nor one that would go from one data controller to another data controller. They are focused on giving the Individual something (a receipt) that is evidence of the Consent Ceremony and contains the terms agreed to. In this way, the Individual has evidence that can be used later if their consent terms are violated, much like a retail receipt is used by a consumer when the thing they bought turns out to be broken or defective.

The diagram here is the Kantara Consent Receipt

Perspective difference between FHIR and Kantara: 

The FHIR Consent is shown here

The Kantara Consent Receipt is intended to be a self-contained message, whereas the FHIR Consent is one Resource to be used within a FHIR infrastructure. The FHIR Consent is just focused on the consent specifics.

Thus, to create a complete equivalent, one would need to assemble the following from FHIR:

Bundle { MessageHeader(1..1), Consent (1..1), Provenance(1..1)}

  • Bundle assembles everything together
  • MessageHeader explains to whom the message is going, and from whom it originates
    • I am assuming a pushed message, but FHIR clearly can be used RESTfully, or a Document could be created.
  • Provenance carries the proof of origination (signatures)
  • Consent carries the consent specifics (a rough sketch of such an assembly follows this list)
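
A rough sketch of what that assembly might look like as a FHIR message Bundle, using Python-style syntax for brevity; element details are elided, the endpoints are made up, and this is not a validated instance:

    consent_receipt_message = {
        "resourceType": "Bundle",
        "type": "message",
        "entry": [
            {"resource": {
                "resourceType": "MessageHeader",
                # to whom the message is going, and from whom it originates
                "destination": [{"endpoint": "https://individual.example.org/inbox"}],
                "source": {"endpoint": "https://controller.example.org/fhir"}}},
            {"resource": {
                "resourceType": "Consent",
                "status": "active",
                "scope": {"coding": [{"code": "patient-privacy"}]}}},
            {"resource": {
                "resourceType": "Provenance",
                # carries the proof of origination (signatures)
                "signature": [{"who": {"display": "PII Controller"}}]}},
        ],
    }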

Mapping FHIR Consent to Kantara Consent Receipt.

FHIR Consent                  Kantara Consent Receipt
------------------------------------------------------------------------------
identifier                    4.3.5 Consent Receipt ID
status                        (N/A - would be active)
scope                         (N/A - would be fixed at privacy-consent)
category                      4.5.5 Consent Type
patient                       4.4.1 PII Principal ID
dateTime                      4.3.3 Consent Timestamp
performer                     4.4.3 PII Controller
                              4.4.5 PII Controller Contact
                              4.4.6 PII Controller Contact
                              4.4.6 PII Controller Address
                              4.4.7 PII Controller Email
                              4.4.8 PII Controller Phone
                              4.4.9 PII Controller URL
                              4.4.4 On Behalf
organization                  4.4.3 piiControllers (including all contact information)
source[x]                     4.7 Presentation and Delivery
    authority                 4.3.2 Jurisdiction
policyRule                    4.4.10 Privacy Policy
provision                     4.5.1 Services
    period                    4.5.9 Termination
        role                  4.5.10 Third Party Disclosure
        reference             4.5.11 Third Party Name
    action                    4.5.2 Service
    securityLabel             4.5.12 Sensitive PII
                              4.5.13 Sensitive PII Category
    purpose                   4.5.3 purposes
                              4.5.4 Purpose
                              4.5.5 Purpose Category
                              4.5.8 Primary Purpose
    class                     4.5.7 PII Categories
    code                      4.5.7 PII Categories

Not well mapped:

I am pleased and very surprised at how well these map. The following items are where there were differences. These differences seem reasonable given the purpose of each, and the capabilities of the environments.

The following items from the Kantara Consent Receipt do map, but not perfectly.
  • 4.3.4 Collection Method - a description of the method by which consent was obtained
    • for FHIR, the current presumption is that the data is collected while treating the patient for healthcare reasons. This presumption is likely not going to hold true as FHIR matures
  • 4.5.8 Primary Purpose -- indicates if a purpose is part of a core service of the PII controller
    • Seems to be a way to differentiate a primary purpose from a secondary one. 
    • FHIR Consent addresses purpose of use regardless of primary or secondary
  • 4.5.9 Termination - conditions for the termination of consent. A link to the policy defining how consent or purpose is terminated.
    • FHIR Consent has a timeframe to automatically terminate, but does not address how the patient would take action
There are a few additional capabilities of the FHIR Consent that are not yet represented in Kantara:
  • verification -- these elements are there to hold who verified the consent ceremony. I am not convinced that this is commonly needed. 
  • dataPeriod -- often a patient is willing to allow some data to flow, but might want to block data from a specifically sensitive period of time. The timeframe is an easy thing to identify, and to enforce.
  • data -- in FHIR we can point at exactly which data is controlled by this rule
  • nested provisions -- FHIR Consent can define nested provisions, thus enabling "this, but not that"... (a sketch follows this list)
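
Here is a hedged sketch of such a nested provision, permitting sharing in general but denying data from a specifically sensitive timeframe and category (the codes and dates are illustrative only):

    consent_provision = {
        "type": "permit",                          # enable this...
        "provision": [{
            "type": "deny",                        # ...but not that
            "securityLabel": [{
                "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
                "code": "ETH"}],                   # an example sensitivity label
            "dataPeriod": {"start": "2015-01-01",  # block data from this
                           "end": "2015-06-30"},   # sensitive period of time
        }],
    }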

Thursday, March 1, 2018

Big audit entries

The ATNA audit scheme has been re-imagined in FHIR as the AuditEvent Resource.
The reformatting is only to meet the FHIR audience's expectations for readability. For this there are really useful datatypes, structures, referencing, and tooling. There is no intention to change anything in a fundamental way. There is a mapping between the two that is expected to translate forward and backward without loss of data. The reality is there might be some cases where the mapping is lacking....

Small entries are large

One of the observations many make about ATNA and AuditEvent is that the schema makes large what could be recorded in a classic log file as a simple unstructured string of about 115 characters. The following example comes from the examples in the FHIR AuditEvent for an Accounting of Disclosures log entry:
Disclosure by some idiot, for marketing reasons, to places unknown, of a Poor Sap, data about Everything important.
becomes a 4604-character XML object or a 4156-character JSON object (hmm, JSON is smaller, but not by much).

THIS is a ridiculous example, as the string clearly is not sufficient, but the point I do want to make is that adding structure makes the space needed larger.

This is a tradeoff that is simply a fact of the difference between unstructured strings and a structured, coded object. The string might be useful, but often needs special processing to get at the data embedded in that string. More often, in a string world, a log analysis must correlate many log entries to get the full story.

The point of ATNA and AuditEvent is that the system making the original record knew exactly the values of Who, What, Where, When, Why, How, etc... so the goal of ATNA and AuditEvent is to provide well-defined ways to record this so that it doesn't need to be guessed at.
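
For illustration, a rough, abbreviated sketch (not the spec's full example) of how the who/why/when/what of that disclosure string becomes coded elements rather than guessed-at text:

    audit_event = {
        "resourceType": "AuditEvent",
        "type": {"code": "110106", "display": "Export"},         # what kind of event
        "recorded": "2018-03-01T09:43:41Z",                       # when
        "purposeOfEvent": [{"coding": [{"code": "HMARKT"}]}],     # why: marketing
        "agent": [{"who": {"display": "some idiot"},              # who disclosed
                   "requestor": True}],
        "source": {"observer": {"display": "disclosing system"}}, # where it was recorded
        "entity": [{"what": {"display": "Poor Sap's record"}}],   # whose data
    }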

So the reality is that an ATNA or AuditEvent entry is likely larger than a string... but most 'happy path' audit log entries are 1-2 KB in size. Not small, but also not big.

Big log entries

The problem is that there are occasionally cases, failure modes, where it would be useful to record more information. For example, when there is a technical failure one might want to record the 'stack trace'; or when a request is rejected, one might want to record more fully the request details and the response error message. 

Or some want to record the results of a Query, something I caution against as it fills the audit log with data that is easily re-created. Often these results are saved in other databases locally, so in that case just link the AuditEvent with that database entry. This could be done by simply putting a database index into an AuditEvent.entity.
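
A hypothetical illustration of that linkage: rather than copying the query results into the log, AuditEvent.entity just identifies where the results already live (the identifier system and value are made up):

    results_link = {
        "entity": [{
            "what": {"identifier": {"system": "urn:example:results-db",
                                    "value": "row-8675309"}},
            "description": "query results held in the local results database",
        }]
    }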

So sometimes there is a need to record a large amount of data along with your audit log entry... so, how should this need be handled?

FHIR offers an interesting solution: the Binary resource. That is to say, you put the big blob into a Binary, and have the AuditEvent point at that Binary. There is an additional feature of Binary that is useful for identifying the security that should be applied to this Binary instance: the Binary.securityContext can point at the AuditEvent instance.
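
A minimal sketch of that pattern (the ids are made up): the big blob goes into a Binary whose securityContext points back at the AuditEvent, and the AuditEvent's entity points at the Binary:

    stack_trace_blob = {
        "resourceType": "Binary",
        "id": "blob-1",
        "contentType": "text/plain",
        # the Binary inherits the access rules appropriate to the AuditEvent
        "securityContext": {"reference": "AuditEvent/ae-1"},
        "data": "PHN0YWNrLXRyYWNlLi4uPg==",   # base64 of the large payload
    }

    big_audit_event = {
        "resourceType": "AuditEvent",
        "id": "ae-1",
        "entity": [{"what": {"reference": "Binary/blob-1"},
                    "description": "stack trace captured at time of failure"}],
    }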

More about FHIR and Audit Logging

Wednesday, February 21, 2018

Maturing FHIR Connectathon without confusing the marketplace

Grahame, being the fantastic Product Manager for FHIR that he is, is asking the FHIR community for input on how FHIR Connectathon should evolve. I started to write a few lines but realized that I had more to say than a few lines. (yeah, I know... blah blah blah)

IHE has been doing Connectathons for almost 20 years (the first was in 1999, with Radiology). IHE did NOT invent the concept of a Connectathon. I was involved in TCP, IP, UDP, NFS, TELNET, and FTP connectathons back in the late 1980s. They were almost exactly the same kind of events. I have a detailed article on what a Connectathon is, and is not... please review it - What is Connectathon?  I have also written about how nice it is to see FHIR Connectathon changing.

I think IHE and FHIR need to be as distinct as possible, but clearly there will be overlap. Each holds a unique position today that those of us involved in both see clearly. However, the outside world already finds it hard to differentiate them. This outside perspective should be seen as a very important factor: if the consumers of our standards and connectathons don't understand the value, or are confused by it, then it is not valuable or clarifying.  

This does not mean that the overlap should be avoided; it should just be deliberate and clearly communicated. So far, FHIR Connectathon has been more of a 'hackathon', and that has been exactly what the FHIR community needed. The value today: very quick (agile) testing of the specification, a proving ground for app development ideas, a safe place to share ideas and push oneself. A critical part of this success is that it is short (1.5-2 days), very inexpensive (compared to IHE Connectathon), and very informal (compared to IHE Connectathon). These are strengths of FHIR Connectathon today that we should not forget.

The mature part of the FHIR community is ready to move to a new step. I don't think that new step goes all the way to what IHE Connectathon does, and it is certainly far from certification (which is also what IHE Connectathon does for certain tracks). The less mature parts of the FHIR community do need a less formal place to play; however, things like FHIR Dev Days are possibly filling this need.

So, where possible, cooperate with IHE Connectathon. Leverage the same tooling where possible. Leverage the same process and event space where possible.

IHE should focus on multi-standard use-cases, and domain specific use-cases. IHE should focus on end-to-end flows that are documented in a Profile or Implementation Guide. 

FHIR should focus on building block use-cases that use FHIR alone, and generally re-usable use-cases. FHIR Connectathon would be more the place to prototype, to investigate, to develop a concept, to build a consensus.

FHIR Connectathon should continue to advance the complexity of the scenarios, and the integration of small scenarios into larger ones. It should mature the testing of building-block scenarios such that they can be held up as complete, something that can be used to do BDD or TDD: a 'standard' modularity beyond what we see today as a 'standard', covering not just the 'encoding' but also testing and block building.

This does not mean FHIR Connectathon doesn't do full end-to-end workflows, just like it doesn't mean IHE would never do hackathon-like things. The overlap will exist; it should just be clear.

Keep our eyes on the Purpose of a Connectathon

To a standards organization, a Connectathon is a way to mature the standard. Both IHE and FHIR have connectathon as a required part of their governance of maturity.

The purpose of a connectathon to a participant is to gain experience interoperating with your potential future partner in a real-world exchange. By focusing on testing in a safe place like a connectathon, one can push the limits of one's own software. The take-away is confidence: when a customer needs your software to talk to that specific peer, you have high confidence that it will work right away; and if it doesn't, you have experience that guides your reaction, including possibly calling on the personal relationship you created at connectathon. 

Formal checkmarks, or certification, are far less valuable than this. Mostly because reality will happen, and that checkmark or certification means nothing when reality isn't working.

Other articles of mine 

Sunday, February 4, 2018

Apple should have a HEART

Apple has re-entered the Healthcare space with their new announcement about support for a person to maintain their health data on their iPhone. There is really nothing technically new, but new or not is not the important bit. What is important is that any visibility given to the Health Data portability problem is good for making changes.

My understanding of what has happened is that Apple has moved from their own proprietary API support to support for the Argonaut-defined APIs. These Argonaut-defined APIs would qualify as a 'standard'; they are based on #FHIR at an older version, DSTU2. So their adoption of a standard API is big. It is not hard; many have done exactly this. But it is big because it is Apple, and with Apple we get marketing of the usefulness of the concept, and we get a motivation for Providers to support the Argonaut API.

The bad news is that this is DSTU2, which brings a risk that these APIs will be frozen at a non-Normative version of FHIR. I hope this doesn't actually happen. I hope that they evolve as FHIR evolves to Normative. The fact that they started with DSTU2, and are ignoring the current STU3, is not good news for this hope of a future normative FHIR.

Consumer empowerment aspect

My understanding of what Apple has done is adopt the SMART-on-FHIR security method and the Sync for Science privacy method. They expect the Patient (their user and iPhone owner) will navigate to each of their supported Healthcare Providers and interact with that portal to give authority to release the records to that iPhone application. This is the model defined as "Sync for Science", a really unfortunate name, as the name came from the original scope but the solution is generally useful. 

The benefit for Healthcare Providers is that they manage everything about the identity linkage: they own the username (and password) the patient uses at their portal, they own the linkage from that username to their Patient ID, and they manage the Consent holding the patient authorization to release to a specified application on the iPhone that can be authenticated in the future.

The Healthcare Providers usually manage the identifiers by sending their known patients a postal mail letter with a username and a one-time secret. The person logs into the portal, gives the secret, and then proceeds to create the password they want. Once this is done, the Healthcare Provider has confidence that they can manage the username/password, and that they know strongly which patient it represents.

The Healthcare Provider manages consents using whatever system they have internally. The consent never needs to be in a standard form, or any specific form or availability beyond what their organization needs. It just needs to utilize the OAuth mechanism to bind the instance of the application the patient is using with the patient authorization (consent).
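
A hedged sketch of the OAuth pattern this describes (a SMART-on-FHIR style authorization-code flow; the URLs, client id, and scopes are illustrative, not any vendor's actual configuration):

    from urllib.parse import urlencode
    import requests

    authorize_url = "https://portal.example-hospital.org/oauth/authorize"
    token_url = "https://portal.example-hospital.org/oauth/token"

    # 1. The app sends the patient to the provider's own portal to log in and
    #    approve release of their records to this application instance.
    consent_link = authorize_url + "?" + urlencode({
        "response_type": "code",
        "client_id": "patient-app-on-phone",
        "redirect_uri": "myapp://callback",
        "scope": "patient/*.read offline_access",
    })

    # 2. The code returned after approval is exchanged for a token bound to
    #    that one patient and that one application instance.
    def fetch_token(auth_code):
        resp = requests.post(token_url, data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "myapp://callback",
            "client_id": "patient-app-on-phone",
        })
        resp.raise_for_status()
        return resp.json()["access_token"]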

Lastly, because it is a relationship with the Patient themselves, when the Healthcare Provider releases the data, they are logically releasing the data to the patient themselves. So there are no Business Associate concerns.

Apple in this case is just hosting an application; they are also the author of that application. They never need to know the Patient's Identity, but they will be given highly sensitive patient data.

Why does Apple change everything?

So why is the fact that Apple is just doing what many applications have done before a big thing?

Apple has a huge number of people in the Apple ecosystem. Therefore the effort that existing Healthcare Providers need to expend to support Apple is a better return on investment, even if one only considers the 'bang for the buck' in terms of the number of that Healthcare Provider's patients (bang) for the level of effort to do the work (even if high). Note this was also a motivation for Apple's previous architecture that used a proprietary API, but the use of standards adds scalability.

Apple people trust that Apple will keep their information, and information about what they do on Apple, private. This is unlike other big identity providers like Google or YAHOO. The Apple people are special in this way, but so is the Apple organization. They have a proven track record (unlike YAHOO) of keeping their data secure, and they have a proven record of not letting their data get mined for advertising opportunities (unlike Google). Therefore the people are less worried that Apple will know what healthcare providers they are seeing. 


So the current solution is absolutely fine. The problem it has is the ability to scale. This is where HEART comes in. HEART is a standard specification; I have participated in its development and have blogged about it.

The basic explanation is that HEART leverages OAuth, specifically a configuration called User Managed Access (UMA), to enable an "Authorization Server", selected by the Patient, to represent Privacy access control decisions according to rules the Patient chooses. Essentially, this moves the Privacy authorization decision out of the Healthcare Provider. 

This is done by giving the Healthcare Provider high assurance that the patient has chosen a specific HEART server as their authorization decision service. Thus the Healthcare Provider can trust any PERMIT or DENY decision that authorization decision service (the HEART service) makes for that patient in that circumstance. This enables the Patient to establish rules ONCE, whereas in the Sync for Science model the Patient must set the rules as many times as there are Healthcare Providers holding data on that Patient. Some patients have a small number of Healthcare Providers; others have many.
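
A hedged sketch of the UMA 2.0 interaction that HEART builds on (the endpoints and identifiers are illustrative): the provider's FHIR server defers the PERMIT/DENY decision to the authorization server the patient chose.

    import requests

    # 1. The app tries the provider's FHIR API; the provider answers 401 and
    #    points at the authorization server, handing back a permission ticket.
    resp = requests.get("https://provider.example.org/fhir/Patient/123")
    www_authenticate = resp.headers.get("WWW-Authenticate", "")  # carries as_uri and ticket

    # 2. The app presents that ticket to the HEART/UMA authorization server the
    #    patient chose; that server applies the rules the patient set up ONCE.
    token = requests.post("https://heart-as.example.net/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": "permission-ticket-parsed-from-www_authenticate",
    }).json().get("access_token")

    # 3. The app retries with the token; the provider trusts the PERMIT decision
    #    made by the patient's chosen authorization service.
    data = requests.get("https://provider.example.org/fhir/Patient/123",
                        headers={"Authorization": f"Bearer {token}"})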

Apple should have a HEART!

This is an elegant solution, but it needs a major new player to make it come to life. Enter Apple. The two factors I mention above are critical: Patients trust Apple, and Healthcare Providers like Apple. These two are unique, as I mention above, but that is not enough.

The third factor is critical. Apple knows high-quality identity information about their customers. Thus it is more likely that, as an Identity provider, they will be able to more accurately, and more authoritatively, build the binding between their identity (the Apple ID) and the various Patient Identifiers at the various Healthcare Providers. This patient identity problem is the biggest 'technical' problem in ALL of the Health Information Exchange (HIE) solutions: binding a real-world identity with a Patient Identifier in a way that has few false-positives (hopefully zero), few false-negatives (hopefully zero), and can't be abused by malicious actors (authenticatable and traceable).

Further, the Apple ecosystem is a place where some trust can be placed. If there is malicious misuse of the healthcare data exchange, the Apple ecosystem can be used to find the malicious actor. This is to say that there is trust that Apple knows what the Apple user is doing, and can find Bad-Apples. (sorry, had to).


Is it critical that Apple start to build out their HEART solution? No, but it is exciting that there is finally someone that I think could pull it off.

Wednesday, January 31, 2018

FormatCode granularity

I was asked the following question:
Confused as to the granularity required for formatCode.
The HL7 link seems to be at a coarse level:
but a recent update has format codes at a document-specific level: 
This links to FHIR and, I assume, MHD

Any advice?
My response

The FormatCode is there to differentiate 'technical format'. It is a further technical distinction more refined than the mime-type. So it is related to mime-type.

FormatCode is not a replacement for class or type. In fact it is very possible to have the exact same type of content available in multiple formats.
See article: Multiple Formats of same document

Including FHIR Document See article on FHIR Documents in XDS

It is true that IHE-defined FormatCodes tend to be one per Profile, whereas all of C-CDA R2.1 is one FormatCode. This difference in scope seems very big, but at the technical level it is not different. That is to say, the IHE XPHR profile defines a unique set of constraints on the format of the content, whereas C-CDA R2.1 similarly defines a unique set of constraints on the format of the content.

This is a good time to explain that what IHE calls a "Profile" is commonly what HL7 would publish as an "Implementation Guide". Thus they are often very similar in purpose.

It is true that XPHR has only one type (34133-9 Summary of Episode Note), whereas C-CDA R2.1 covers a set of unique use-cases that are each a unique clinical 'type' of document. This is a good example of why formatCode is not the same thing as 'type': type expresses the kind of clinical content, whereas FormatCode expresses the technical encoding used.

So the FormatCode focuses on the technical distinction as a sub-type of mime-type, and should be as specific as necessary to identify the Profile (or Implementation Guide) set of constraints.
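
To illustrate the distinction, here is a hedged sketch of two DocumentReference content entries for the same clinical type in two technical formats (the codes shown are common examples, not an authoritative list):

    summary_as_ccda = {
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "34133-9"}]},          # Summary of episode note
        "content": [{"attachment": {"contentType": "text/xml"},
                     "format": {"code": "urn:hl7-org:sdwg:ccda-structuredBody:2.1"}}],
    }

    summary_as_fhir_document = {
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "34133-9"}]},          # same clinical type...
        "content": [{"attachment": {"contentType": "application/fhir+json"},
                     # ...different technical format
                     "format": {"code": "urn:ihe:iti:xds:2017:mimeTypeSufficient"}}],
    }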

Further questions are welcome.

Saturday, December 30, 2017

HIE Future is Bright - stepping into 2018

This is my overall summary of the Healthcare Standards, Privacy, and Security space. It happens that the framework for explaining why the future is bright for HIE comes from the Wisconsin HIE (WISHIN) fall summit. Note the slide decks are now available. They used the following diagram to show what they viewed as the HIE future. I like it, so I will use it here.

This is such an exciting perspective on what the Wisconsin HIE delivers today, and where they are targeting future support. The other slide decks further elaborate on this plan. It is driven by delivering Value, not just Volume. They had a segment that focused on Care Coordination as a driver of these changes.

I have written articles about each of these transitions and more. Here they are

My Perspective

The big factors as I see it:
  1. The Document model is still very important, even if it is frustrating. This is especially true of historic episodes and visits. These historic events need to retain their full context, which is what a Document model provides.  
    • The good news is that FHIR has a Document model, and the FHIR Document model has a directly convertible data model
    • A FHIR Document travels an HIE easily
    • CDA will fade, but never disappear. It is just way too hard to get right.
  2. The future will be more about automating the consumption. Up to now we have focused on getting EHRs to publish, or simply make available, the data they have. This first step is critical to Interoperability. We now need to focus more on data consumption, as the way we consume Documents today is not optimal or even functional.
    • The good news is that FHIR is a fantastic API for consuming data. So there is a rich opportunity to make the consumption experience better 
    • Enterprise class API  ==>  FHIR API to Document Sharing
    • FHIR has a subscription model, to make service-to-service APIs more efficient and responsive
    • FHIR has the ability to bundle and disassemble without conversion or loss
  3. The future will connect the nation. Today there are a small number of very large networks, but there is no linkage between them. So if you happen to live in a part of the country where you need to go to doctors within different networks, then you must manually transfer the data yourself.
    • The good news is that there are talks and actions to unite eHEX, CareQuality, CommonWell, and others.
    • Not just technically, but also logically. We need nationwide policies on the use of Vocabulary, Document formats, and Care Planning
  4. The bad news is that Security and Privacy are going to get worse before they get better. This is more a statement about Security than Privacy. I don't see any of these future benefits abusing Privacy, but I am worried that Privacy-By-Design isn't ingrained. I am more worried about Security, in that the security model for FHIR is very immature, as is the junction between FHIR and the other worlds where the data exists. This is not to say that OAuth is bad, but rather that the healthcare use of OAuth is not mature, and the healthcare needs of OAuth are well beyond what OAuth was designed to do.
    • The good news is that there really is plenty of time in the coming years to work this out. What we need is interested bodies to get involved with open consensus prototyping, trial, documentation, and improvement.

The future might put the Patient more at the center of their own care. Today the GP drives everything. I don't think they want to, but it is just too hard to do anything else. Some say there is business pressure to keep control within that doctor's office; I don't think this is all that true. I think it is simply too hard. First, most patients are not technically savvy, though that is changing. Second, most patients are not feeling well, so it is hard for them to take leadership. MOST important, it takes a community to put the patient at the center, and we don't yet have a connected community.

These will not happen in 2018; I am just predicting they will be the central motivations that will influence change. If they happen, all the better. We must remember that change takes far longer than one expects it should.

Some blog articles I am working on:

  • Direct HISP on FHIR - replacing XCA api with a FHIR api
  • Meaningful Use means IHE PDQm, MHD, QEDm, and mXDE
  • Reverse MHD
Please contact me if you have a topic you would like me to cover.