Monday, April 25, 2011

HIT Standards - PCAST analysis against Standards

I missed the HIT Standards Committee meeting last week, but when I looked at the presentations and the minutes I was EXCITED. They took my advice; well, they did what I really wanted them to do: they extracted the PCAST principles and evaluated the current standards against them. More precisely, they contracted the work out to Mitre, who did a really good job.

However, I do have some corrections and additional insight that I would like to offer the committee members, Mitre, and my blog readers.

Page 4: I really hope that when they say "Bias towards false negatives", they really mean "Bias AWAY FROM false negatives". A false negative will result in an inappropriate disclosure. It is true that the user should notice when they are working with the wrong patient, but not all use-cases have a user interact with whole documents. I would hope that the Doctor never needs to 'browse', but rather has the data gathered, collated and analyzed for her.

I know that we have an aversion to creating federal patient identities. I think this is misguided. I want to see real discussion of the reasons for and against. One thing that we MUST do is make sure that whatever identity is given is simply an identity. These identities should be PUBLIC. If they are designed to be public, then we will not improperly use them as a secret. If we had this, we would NOT NEED all of those other demographics such as Address, Phone Number, Zip Code...

Page 5: And I would suggest that although the IHE XDS Metadata contains the values that Mitre considered critical for Patient Matching, the more important thing is that IHE has the service interfaces for Lookup (PDQ), Cross-Reference (PIX), Updating (PAM), and Community Discovery (XCPD). The other alternatives don't have these service interfaces.
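
To make the distinction concrete, here is a conceptual sketch in Python of what "having service interfaces" means, as opposed to merely having matching fields in metadata. The class, method names, and data types are purely illustrative assumptions of mine; they are not IHE transaction definitions.

```python
from dataclasses import dataclass
from typing import Protocol

# Illustrative only: each method stands in for an IHE transaction that a consumer
# can actually call, rather than a set of demographic fields it must compare itself.

@dataclass
class Demographics:
    family: str
    given: str
    birth_date: str   # YYYYMMDD
    zip_code: str

class PatientIdentityServices(Protocol):
    def find_candidates(self, demo: Demographics) -> list[str]:
        """Lookup (PDQ-style): return candidate patient identifiers for these demographics."""

    def cross_reference(self, patient_id: str, target_domain: str) -> str:
        """Cross-Reference (PIX-style): translate an identifier into another assigning domain."""

    def update_demographics(self, patient_id: str, demo: Demographics) -> None:
        """Update (PAM-style): feed demographic changes to interested systems."""

    def locate_communities(self, demo: Demographics) -> list[str]:
        """Community Discovery (XCPD-style): find communities that hold records for this person."""
```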



Pages 6 & 7: Provenance is nicely introduced, and although the conclusion "Start with shallow provenance, increase over time" is a great one, there is so much that is not said by this statement. Provenance is a very tough topic, especially when there are so many different document types to cover. This is why IHE duplicates some of the provenance aspects found in CDA and puts them into the XDS metadata (which is also used by XDM, XDR, and XCA). This allows some level of provenance to be maintained in the XDS metadata even when the document is a simple text document, a PDF, or some other format.
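
Here is a minimal sketch of the provenance-related attributes that the XDS DocumentEntry metadata carries alongside any document; the attribute names follow the XDS metadata model, but the values (and the Python dict form) are purely illustrative.

```python
# Illustrative values only; in a real registry these are XDS DocumentEntry attributes.
document_entry = {
    "uniqueId":            "1.2.840.113619.2.62.994044785528.12",  # hypothetical OID
    "creationTime":        "20110425103000",
    "authorPerson":        "^Welby^Marcus^^^Dr^MD",
    "authorInstitution":   "Community Hospital",
    "legalAuthenticator":  "^Welby^Marcus^^^Dr^MD",
    "sourcePatientId":     "89765a87b^^^&1.3.6.1.4.1.21367.2005.3.7&ISO",
    "typeCode":            "34133-9",                    # LOINC: Summarization of Episode Note
    "classCode":           "Summarization of Episode",
    "confidentialityCode": "N",                          # HL7 Confidentiality: normal
}

# Because these attributes live in the metadata rather than inside the document,
# a receiver can still answer "who created this, where, when, and for which patient"
# even when the payload itself (e.g., a scanned PDF) carries no structured provenance.
print(document_entry["authorInstitution"], document_entry["creationTime"])
```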

So I would say that the assessment of both XDS and CDA on Chain-of-Custody is wrong. For XDS the support is simplistic, but CDA includes the recording of the custodian of the original electronic document (the originator of the information) as well as the source of it (through the informant participation), and if you wanted to go further, you could. The exact way to do it simply isn't specified, mostly because nobody has ever had a convincing use case for it.

Each CDA document is documentation of a specific encounter; by including both author and informant, it allows the chain of custody of information to be discovered. Information going into the record does so only after a healthcare provider makes a clinical judgment that the information is both accurate and relevant (needs to be recorded). It actually becomes a new piece of information: Provider X (the author) thinks that this Act should be documented as part of the encounter. The information may in fact have been received from multiple sources (other providers, the patient, a family member, a lab result, et cetera). What is the chain of custody for a diagnosis, and how is it intended to be used? Is it to discover the source of errors in the transmission of information? Is it to deal with redisclosures? These are interesting issues to discuss. I don't necessarily accept it as a requirement in the communication, but I can accept a requirement that it be discoverable.
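
To make the author/informant point concrete, here is a minimal, illustrative fragment (built and parsed in Python; it is not a conformant CDA document) showing the two header participations: the author who makes the clinical judgment, and the informant the information originally came from. The names and identifiers are made up.

```python
import xml.etree.ElementTree as ET

fragment = """
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <author>
    <time value="20110425"/>
    <assignedAuthor>
      <id root="2.16.840.1.113883.4.6" extension="1234567890"/>  <!-- NPI, example value -->
      <assignedPerson><name><given>Marcus</given><family>Welby</family></name></assignedPerson>
    </assignedAuthor>
  </author>
  <informant>
    <relatedEntity classCode="PRS">  <!-- personal relationship, e.g., a family member -->
      <relatedPerson><name><given>June</given><family>Welby</family></name></relatedPerson>
    </relatedEntity>
  </informant>
</ClinicalDocument>
"""

doc = ET.fromstring(fragment)
ns = {"hl7": "urn:hl7-org:v3"}
print("author id:", doc.find("./hl7:author/hl7:assignedAuthor/hl7:id", ns).get("extension"))
print("informant class:", doc.find("./hl7:informant/hl7:relatedEntity", ns).get("classCode"))
```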

In CDA, if you want to record credentials associated with an Information Recipient, you can certainly do so. The intendedRecipient element contains id elements, which allow any number of license identifiers to be recorded, including NPI, state license identifier, DEA number, et cetera.

And to add cryptographic assurance, one simply digitally signs each document using the IHE Document Digital Signature (DSG) profile.
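
As a sketch of what that signature buys you, here is the cryptographic core of a detached document signature in Python using the cryptography library. Note the assumptions: DSG actually packages this as an XML Digital Signature document that references the signed documents, and the signer would use the private key behind their X.509 certificate; here a freshly generated key stands in for it.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the signer's real certificate key (assumption for this sketch).
signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"...the bytes of the shared CDA, PDF, or text document..."

# Sign the document bytes (DSG wraps this in an XML-Signature document; not shown here).
signature = signer_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Any recipient holding the signer's public key (from their certificate) can verify
# that the document has not been altered since it was signed; verify() raises on failure.
signer_key.public_key().verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```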

Slide 8: We then get to Consent. I like their diagram, where we see that there are "Data Elements with Content Metadata". What is not shown, and gets confused, is that the metadata in this case includes a set of tags that the "Policy: Assembled from Modular Components" would act upon; that is, it is the metadata tags on the clinical data that are used by the policy. The diagram also conflates this with the security/privacy context of a request to access the clinical data, which is the domain of the security context. Thus there are three concepts that need to be analyzed against existing standards.

Slide 9: They do actually show the three concepts; they just lump them against one standard. The items they have identified as "Request Metadata" are the characteristics of the User Context that are provided by the system requesting access to some clinical information. This information is NOT included in BPPC, but it is FULLY included in XUA.

They then include the "Content Metadata", which I will agree is NOT covered by BPPC, because this is metadata on all the OTHER documents, so it belongs to those documents. Content:Datatype and Content:Sensitivity are fully supported by the XDS metadata and thus are fully available regardless of the document type or transport type (XDS, XCA, XDM, XDR). Further, these two metadata values are common in DICOM transports and available for HL7 transports, ALL with common vocabulary. The third value, "Content:Coverage", is very much not the domain of clinical documents, although I do agree that CDA covers it.

The last value is the domain of Consent (and authorization); actually, there is far more that should be listed here, including the duration of the consent (needed by many states), the organizations that are a party to the consent (needed for authorizations too), a digital signature on the consent, and the ability to capture the signing ceremony. These are covered by BPPC, even if complex consent rules are not.
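
Here is a hedged sketch of why keeping the three concepts separate is powerful: the user context arrives with the request (the XUA/SAML attributes), the content metadata travels with each document (XDS DocumentEntry), and the consent captured via BPPC is what binds the two together at decision time. The attribute names and the policy rule below are illustrative, not taken from any profile.

```python
request_context = {        # asserted by the requesting system (XUA / SAML attributes)
    "subject_id":     "Dr. Marcus Welby",
    "subject_role":   "physician",
    "purpose_of_use": "TREATMENT",
    "organization":   "Community Hospital",
}

content_metadata = {       # carried in the XDS metadata of the requested document
    "classCode":           "Summarization of Episode",  # Content:Datatype
    "confidentialityCode": "R",                         # Content:Sensitivity (restricted)
}

patient_consent = {        # captured as a BPPC consent acknowledgement
    "acknowledged_policy": "share-restricted-data-for-treatment",   # hypothetical policy id
    "permits": lambda req, doc: (
        req["purpose_of_use"] == "TREATMENT"
        and req["subject_role"] == "physician"
        and doc["confidentialityCode"] in ("N", "R")
    ),
}

if patient_consent["permits"](request_context, content_metadata):
    print("access permitted: release the document")
else:
    print("access denied: return an error and record an audit event")
```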

Conclusion:
I am very excited that this work was done, and I applaud the great work of the Mitre team. I challenge them to take this information and enhance the work, especially the Consent portion. I realize that higher executives want you to speak of these as one thing, but they need to understand that separating these concepts will create a more powerful solution and does not diminish the analysis. My last observation: looking across all of the solutions analyzed, there seems to be a tendency to say "N" when the standard does not mandate the capability. It is unusual for a standard to mandate something; what should be assessed is whether the standard supports the capability by specifying the mechanism. Mandates are the domain of Policy, not Technology.

So, please look at the profiles discussed above.

There are others that can be used, including DSG, XDS-SD, SVS, etc.

Friday, April 22, 2011

Not - ANSI Initiative to Examine Financial Impact and Harm of Breached Patient Information

I reported back in March that I was going to 'jump on' the ANSI and Shared Assessments Launch Initiative to Examine Financial Impact and Harm of Breached Patient Information. Well, I attended what they said would be their kickoff "MEETING", but it was much more of a "WEBINAR". There was no discussion, just well-polished presentations by Lawyers, Security-Product Vendors, and Consultants. I suspect this group has been awakened by the HHS Breach Notifications and is looking for new business opportunities.

The following press release was issued this week: Internet Security Alliance Partners with ANSI and Shared Assessments for Launch of Project on Financial Impact of Breached Protected Health Information.

Optimistically, they seem to be taking a holistic view of healthcare, although one question about clinical research left them without an answer. I have heard from some of my peers that they too are worried that this group does not understand Healthcare and its special sensitivities, including not only how sensitive the data is but also how critical patient safety and treatment are. The stated goals would bring much-needed clarity to a market space that doesn't have much experience with healthcare.

They have a very short timeframe: they want to do the research and form opinions this summer, with a clear press spread in July, when they also indicate they will be presenting 'on the Hill'. Because of the holistic view, they are not likely to get into details.

This seems to be far more an opportunity for Lawyers, Security-Product Vendors, and Consultants to make a case for their business. Thus I am not going to participate further.

Friday, April 15, 2011

Separation of Layers: Security Error Codes

This is going to be a deep technical article. In short: to those profiling a use-case, please don't combine behaviors of the different architecture layers; that is, keep Security separate from Transport, separate from Session, separate from Application behavior.

There are many projects that are looking to use IHE XDR to enable workflows. These projects are building on top of existing NwHIN Exchange and IHE profiles for very specific use-cases. It is not unusual to profile a higher-level workflow on top of another profile; for example, IHE does this itself in profiles like XDS-I from the IHE Radiology domain. These projects try really hard to profile everything, which is a good thing. The mistake that often happens is that they start to mix the different OSI layers; that is, they over-specify application needs at the transport or security layer.

XUA is a profile of SAML that is specific to transactions that use SOAP. So the XUA assertion can be added to XDR, but also to QED, XDS, XCA, XDS-I.b, etc. The big advantage of using XUA (or SAML in general) is that the receiver of the transaction has information that it can use to further enable access controls on the service side. These access controls would either allow the transaction to continue, in which case the security layer gets out of the way and lets the next layers process the request, or could find some reason to deny the transaction, which is an error case.

In all cases, an error due to the SAML assertion must follow the SOAP and WS-Security specifications. This recognizes that the SAML assertion is part of the security layer and not part of the application layer.

The IHE XUA Profile covers this in IHE ITI TF Vol 2b, Section 3.40.4.1.3 "Expected Actions":
3.40.4.1.3 "Expected Actions"
The X-Service Provider shall validate the Identity Assertion by processing the Web-Services
Security header in accordance with the Web-Services Security Standard, and SAML 2.0 
Standard processing rules (e.g., check the digital signature is valid and chains to an X-Identity
Provider that is configured as trusted). If this validation fails, then the grouped Actor's associated
transaction shall return with an error code as described in WS-Security core specification section
12 (Error Handling, using the SOAP Fault mechanism), and the ATNA Audit event for
Authentication Failure shall be recorded according to ATNA rules.

The XUA Profile references SOAP, WS-Security, and the WS-I Basic Security Profile. The SOAP Fault is specialized in the WS-Security specification, Section 12 "Error Handling".

That section explains why faults are not mandated: policy may choose to provide no response at all, so as to be more robust against attacks. This is also emphasized by WS-I Basic Security Profile requirement R5814:

R5814 Where the normal outcome of processing a SECURE_ENVELOPE would have resulted in the transmission of a SOAP Response, but rather a fault is generated instead, a RECEIVER MAY transmit a fault or silently discard the message.

The WS-Security specification defines that the fault mechanism is to be used:

If a failure is returned to a producer then the failure MUST be reported using the SOAP Fault
mechanism.  The following tables outline the predefined security fault codes.  The "unsupported"
classes of errors are as follows.  …

There are eight fault values defined; I am not going to list them all here, as for SAML assertions there is more specialization. The WS-Security specification set includes a profile for SAML assertions (the SAML Token Profile) that further explains the reasons behind five of the fault values.

Reformatted from the table in Section 3.6 "Error Codes":
  • wsse:SecurityTokenUnavailable - A referenced SAML assertion could not be retrieved.
  • wsse:UnsupportedSecurityToken - An assertion contains a <saml:Condition> element that the receiver does not understand, or the receiver does not support the SAML version of a referenced or included assertion.
  • wsse:FailedCheck - A signature within an assertion or referencing an assertion is invalid.
  • wsse:InvalidSecurityToken - The issuer of an assertion is not acceptable to the receiver.
  • wsse:UnsupportedSecurityToken - The receiver does not understand the extension schema used in an assertion.
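
To tie this together, here is a hedged sketch in Python of the behavior described above: whatever the detailed reason a SAML assertion fails validation, the responder returns only one of the predefined wsse subcodes in a SOAP Fault and keeps the detail for its own audit trail. The namespaces are the standard SOAP 1.2 and WS-Security ones; the reason-to-subcode mapping and the audit_log stand-in are my own illustrative assumptions, and a real implementation would use the exact fault strings from the specification.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def audit_log(detail: str) -> None:
    # Stand-in for recording the ATNA "Authentication Failure" audit event.
    print("AUDIT:", detail)

def soap_fault_for(reason: str) -> bytes:
    # The detailed reason stays server-side, in the audit log.
    audit_log(reason)

    # Only a coarse, predefined subcode goes back on the wire (illustrative mapping).
    subcode = {
        "signature-invalid":   "FailedCheck",
        "issuer-not-trusted":  "InvalidSecurityToken",
        "unsupported-version": "UnsupportedSecurityToken",
    }.get(reason, "InvalidSecurityToken")

    ET.register_namespace("env", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    envelope.set("xmlns:wsse", WSSE_NS)   # so the wsse: prefix in the subcode resolves
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    fault = ET.SubElement(body, f"{{{SOAP_NS}}}Fault")
    code = ET.SubElement(fault, f"{{{SOAP_NS}}}Code")
    ET.SubElement(code, f"{{{SOAP_NS}}}Value").text = "env:Sender"
    sub = ET.SubElement(code, f"{{{SOAP_NS}}}Subcode")
    ET.SubElement(sub, f"{{{SOAP_NS}}}Value").text = f"wsse:{subcode}"
    reason_el = ET.SubElement(fault, f"{{{SOAP_NS}}}Reason")
    text = ET.SubElement(reason_el, f"{{{SOAP_NS}}}Text")
    text.set("{http://www.w3.org/XML/1998/namespace}lang", "en")
    text.text = "The security token could not be authenticated or authorized"
    return ET.tostring(envelope, encoding="UTF-8")

print(soap_fault_for("issuer-not-trusted").decode())
```
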
Conclusion

So, any reason for rejecting the transaction because of the content of the WS-Security header, including the SAML assertion, is represented by SOAP Faults. That the reason is not further differentiated beyond these codes is for good security-practice reasons. First, one expects that a good partner in a transaction does understand the policies and requirements; thus any further reason code would not help a good partner, but it would clearly help a malicious individual. Note that the service can certainly record a very detailed reason for the failure in its audit log. A good partner, upon getting a WS-Security fault, can ask for assistance in understanding the requirements and policies using other channels (e.g., the phone). There can also be exceptions to this rule during provisioning: for example, during early provisioning of a relationship with a good partner one could be more expressive, but once the relationship goes into production these expressive errors are turned off. I would recommend against even this, as it is common for these debugging modes to be accidentally left enabled, thus exposing the operational environment. The audit log plus a phone call is a far more secure mechanism.

Monday, April 11, 2011

SSL is not broken, Browser based PKI is

Trust using Public Key Infrastructure is alive and well, especially when profiled the way that healthcare is doing it.

There are many articles being written lately about how broken SSL is. I assure you that SSL is not broken, but this declaration does not in any way contradict anything said in those articles. What is wrong is their titles. I suspect that each of the authors knows that their title is wrong, but recognizes that it is much harder to explain "browser-based PKI" than it is to just denigrate all of SSL.

The problem is well defined by others (e.g., The Register's "How is SSL hopelessly broken? Let us count the ways", "SSL And The Future Of Authenticity", and "HTTPS Is Under Attack Again"), so I am not going to rehash it. I will however say that what the Browser publishers did (all of them, vendor and open source) was produce something that was better than nothing, and likely the best they could have possibly produced at the time. I am not convinced that even today the use-case scope that they are trying to address is possible to do any other way. The scope is simply too big. So, what we have is something that was "Good Enough"; had we waited for "Perfection", the internet would never have left the confines of a research toy.

The good news is that Healthcare standards and regulation efforts have had the benefit of this example of failure. Back in 2004, when IHE was putting together the Audit Trail and Node Authentication (ATNA) profile, we were very careful to take a very strong standard, TLS (the standards version of SSL), and carefully call for the certificates to be either manually managed one-by-one, or managed off of a dedicated Certificate Authority (CA) explicitly for that purpose (see IHE ITI TF Vol 2a: 3.19.6.1 Certificate Validation). We spoke about this for a few years and finally realized we needed to capture even more details in a White Paper on Management of Machine Authentication Certificates.

Further, we recognized that the trust must be bi-directional. It is important that a server be strongly authenticated, but that server really should have a way to tell that a truly authentic client is connecting to it. Thus we defined from the very beginning something that is not done with SSL at all, and rarely done with TLS: a requirement that both sides mutually authenticate. In this way a rogue system can't probe a server, as that rogue system won't be able to authenticate itself. This one step stops many malicious methods.
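
Here is a minimal sketch of these two points using Python's ssl module. The certificate and key file names, the CA bundle name, and the host name are hypothetical; the point is that trust is anchored on a dedicated CA rather than the browser bundle, and that both sides must present certificates.

```python
import socket
import ssl

# Service side: accept only clients whose certificates chain to the dedicated CA.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")   # hypothetical files
server_ctx.load_verify_locations(cafile="affinity-domain-ca.pem")         # NOT the OS/browser store
server_ctx.verify_mode = ssl.CERT_REQUIRED  # mutual authentication: client must show a certificate
# server_ctx.wrap_socket(listening_socket, server_side=True) would be used by the service.

# Client side: present our own certificate and trust only servers from the same CA.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
client_ctx.load_verify_locations(cafile="affinity-domain-ca.pem")

with socket.create_connection(("xds-registry.example.org", 443)) as sock:  # hypothetical host
    with client_ctx.wrap_socket(sock, server_hostname="xds-registry.example.org") as tls:
        print("mutually authenticated TLS:", tls.version())
```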

Healthcare Use-cases:
One of the big advantages we have is that healthcare has a much smaller use-case scope. It may seem that universal access to healthcare information globally might be too big a scope, but even at the intergalactic scope this is more controlled than the scope of an Internet Browser. I am not going to say that we have the problem solved, but the very fact that our use-case scope is much smaller makes a solution more likely.

The Direct Project had many discussions on how to handle certificates. There were quite a few attempts to oversimplify in ways that would have created a problem very similar to the one the Internet Browser has. But many good people kept insisting that we can't re-invent that failed system. I cover this in my blog post on Trusting e-Mail.

The Exchange pattern is different; I cover some USA initiatives in "Healthcare use of X.509 and PKI is trust worthy when managed". Although that is a discussion of the USA initiatives, it actually covers ground that is being covered in many places.

Revocation Checking:
When a private key is exposed, the PKI solution is to indicate that the certificate is revoked. This is done using either a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP). I am not going to go into the technical details of either of these; they are methods of indicating that a certificate that looks otherwise good is actually not good.

Part of what the recent articles point out is that, again, the Internet Browsers have implemented a policy that is potentially not the best policy. Again, I will point out that at the time they didn't really have a choice. The policy that they implemented is to treat a failure to check revocation (for example, the CRL or OCSP service can't be reached) as an indication that the certificate is likely good. A really security-minded person would see this as broken: clearly, if you can't get a positive indication that the certificate was NOT revoked, then you must assume that it actually is revoked. A security-minded person would have the system 'fail closed'; that is, if in doubt, don't allow it. The Internet Browsers chose to 'fail open', again because of their scope.

Healthcare has not really addressed Revocation Checking and what it means to the healthcare use-cases. I have been involved in a handful of these discussions, and they typically never really come to a conclusion. The terms 'fail open' and 'fail closed' are too narrow for healthcare. We need to think in terms of 'fail safe': that is, to look at the use-case and do the safe thing, where safety considers the patient's medical status. Safe might be to 'fail closed', telling the patient to come back tomorrow; but safe might be to 'fail open' because the patient is in great pain and danger.
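
As a sketch of what 'fail safe' could look like in code, here is a small Python illustration. The status values mirror the three possible revocation outcomes; the clinical contexts and the decision rule are purely illustrative assumptions, since real deployments would define them in policy.

```python
from enum import Enum

class RevocationStatus(Enum):
    GOOD = "good"        # a fresh CRL/OCSP answer says the certificate is not revoked
    REVOKED = "revoked"  # positively revoked: never accept
    UNKNOWN = "unknown"  # the CRL/OCSP source could not be reached

def accept_connection(status: RevocationStatus, clinical_context: str) -> bool:
    if status is RevocationStatus.GOOD:
        return True
    if status is RevocationStatus.REVOKED:
        return False
    # UNKNOWN: 'fail open' (browser behavior), 'fail closed' (strict security),
    # or 'fail safe', letting patient safety decide (illustrative rule below).
    if clinical_context == "emergency-treatment":
        return True   # fail open: delay could harm the patient; audit heavily
    return False      # fail closed: routine access can wait until status is known

print(accept_connection(RevocationStatus.UNKNOWN, "routine-records-request"))  # False
print(accept_connection(RevocationStatus.UNKNOWN, "emergency-treatment"))      # True
```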

Revocation must be part of the PKI discussion. Unfortunately there is no simple answer here, unless you consider "fail safe" a simple answer.

Conclusion
Covering an infinite scope, like Internet Browsers must cover, is impossible. When the scope is limited to a specific industry, a much more intelligent solution can be achieved. This has happened in other industries. SSL is not the problem; the browser PKI trust model is the problem. We must stay alert and make sure that simplification doesn't make a fatal mistake. Don't throw the baby out with the bath water.

Thursday, April 7, 2011

PHR as an equal peer on the HIE

There are many people wondering why the Patient is not involved in the discussions of HIE. I think that the patient is involved, but I am not likely to convince any skeptics. What I am frustrated by is that the PHR vendors, who claim to speak for the Patient (I'll leave that skepticism aside), are sitting around the edge of the room and shouting insults at those trying hard to make something work. A very civil example is one published yesterday by Bill Crounse of Microsoft. Bill lays out a very realistic view of how the Patient can collect and make available their health information.


I would rather the PHR vendors come into the open and transparent discussions as participants who want to be constructive. The HIE model I have always envisioned, and worked hard to make concrete in the IHE XDS and XCA profiles, is one where the PHR is an equal partner on the HIE.

A fact that is not well publicized by Microsoft is that they were a major force in the creation of XDS.b, as their IHE member was the editor of the supplement throughout its development. So, clearly there was a time when they did participate openly and transparently in the development of the profiles that they are now ignoring.

Another fact that is not well publicized by Microsoft is that they were a major contributor to PCAST, where they wrote about a very different model for building an HIE.

By participating as an equal partner on the HIE, the PHR vendors can make the whole vision that Bill has happen. And yet, for those patients that are NOT CONNECTED, their data can still be shared; and the HIE can still support those use-cases that must proceed without the patient's agreement (such as legal action).

I challenge the PHR vendors to come into the open. Participate openly in a transparent way. Help us understand why there is concern with the current models. Help us shape the solution. This is a journey, not a destination. Stepping stones are what we need to make concrete while having a good vision of the horizon.

Update 4/8/2011: Bill tells me via Twitter DM that he never got my comment, so I entered it again.

Wednesday, April 6, 2011

Trusting e-Mail

The "The Direct Project" re-branded from "NHIN-Direct Project" - is addressing the use-cases where the very small healthcare clinic that has little or no current Healthcare IT technology can take some initial steps into the Healthcare IT world. I very much support this usecase, as this audience really needs low-tech solutions. Getting this audience to think in terms of electronic documents (text, PDF, CDA, or any other) is more important than getting them convinced on a specific method of exchanging the documents. A component of the Direct Project that has not been totally settled is Trust, that is the 'why' of Trust.

The Direct Project has a nice formal "Overview" document. Given that I was a contributor to this document, I think it is a good one. It is a really good introduction; it is not a technical document, specification, or deployment guide. Those other topics are covered elsewhere on The Direct Project wiki.

The Direct Project is recommending that Secure E-Mail be used to enable the very small doctor office to communicate patient information with others. In short, replacing their FAX machine with e-Mail, but e-Mail that is 'secured'. So, how does e-mail get 'secured'? There are a few ways to do this, with the most common being an isolated and/or proprietary messaging system (e.g., the old AOL network). This is clearly not a solution, as it is neither standards-based nor open. In the standards space there are PGP and S/MIME (I am not going to spell out the acronyms, as it is really not that helpful here). The Direct Project chose to specify S/MIME. This answers the question of the standards to protect the content, but it does not address why someone should trust the content.

The technical answer is X.509 Digital Certificates and Public Key Infrastructure (PKI): more technology that I really don't think you need to fully understand right now. These are technologies that are really good and really mature. They give us a technology equivalent of credit cards. With credit cards there is a known system behind them, and there are online ways that merchants can verify they are not stolen. We trust credit cards because we have built up trust in them. They are not all issued by the same company; most are issued by the big credit card agencies, but there are still some that are issued by a local store. The important part is that we as consumers don't know how this all works, yet we have come to trust them.

Trust is not easy. I could go deep into the X.509 technical details, but ultimately trust is more of a soft art; it needs to be built and maintained. Trust can be one-on-one, like you and your closest friends; trust can be built in a close social group, like your church or workplace; or trust can be brokered by large institutions. The X.509 system can scale in all of these ways as well.

Starting big: the USA has an infrastructure (PKI) where many federal partners have 'cross-certified'; essentially they have agreed that each other's way of issuing X.509 Digital Certificates (credit cards) is good enough. There is technology behind 'cross-certifying', but I am keeping this blog to the softer arts. This cross-certifying allows someone looking at another identity to be sure that it was issued by someone they 'should trust', meaning the federal partners have a way to know that the other party should be trustable. This cross-certifying has been extended to others. I cover this in more detail elsewhere.

This wonderful system says I 'should trust' this identity, but I might not really want to trust it for sending or receiving patient data. As a doctor, why would I trust EVERYONE who has a federally issued identity to send me patient data? So, this still means that I need to pick out some subset that I Trust. I have a number of communications with DoD and VA customers. Their certificates chain to the Federal PKI bridge, and I know that they require far more identity assurance than our GE certificate authority. Even with this fact, I do NOT TRUST ALL certificates issued through the Federal PKI bridge. I trust those that I know I should.

In my job as a Healthcare Standards developer, I work with GE's competitors. So I also have signed e-mail conversations with specific individuals in companies like Siemens, Philips, etc. Clearly I do NOT want to trust all of their corporate-issued certificates, but I do want to trust their standards developers. I also deal with individual consultants who use self-signed certificates.

My solution is to leverage the functionality of my email application. When I want to send something securely to someone else, my email application will look into my local directory of trusted X.509 certificates. If it doesn't find one, it tells me that it can't send the message securely. If this happens, then I send that person a simple email telling them who I am and that I want to send them something securely. I tell my email application to sign this email with my GE-issued X.509 Digital Certificate. Eventually I get back an email from this person that is signed by them, thus giving me their X.509 Digital Certificate. Now that I have their X.509 Digital Certificate, I can send my original message securely to them.
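
Here is a sketch of that "look in my local directory of trusted certificates" step, using Python's cryptography library. The folder location and the matching rule are my own illustrative assumptions; a real mail client does this internally and also checks validity dates, key usage, revocation, and so on.

```python
from pathlib import Path
from cryptography import x509

TRUST_DIR = Path("~/.trusted-smime-certs").expanduser()   # hypothetical location

def find_certificate_for(recipient: str):
    """Return a trusted certificate whose subjectAltName email matches, else None."""
    for pem_file in TRUST_DIR.glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
        try:
            san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        except x509.ExtensionNotFound:
            continue
        emails = (e.lower() for e in san.value.get_values_for_type(x509.RFC822Name))
        if recipient.lower() in emails:
            return cert
    return None   # no trusted certificate yet: fall back to the "send a signed hello" exchange

cert = find_certificate_for("colleague@example.org")
print("can encrypt" if cert else "no trusted certificate yet; ask them to send me a signed email")
```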

Trust happened in there. I didn't detail how trust was built, so let me do that now. When anyone receives a signed email, the email application will verify the signature and indicate whether it was good, bad, or unverified. Good means that it fully checks out; bad clearly says that there is falsification going on. Unverified means that the signature checks out, but that the identity is not known. This will happen for any first contact. If I am not expecting the email, I will likely simply delete it. If I am feeling generous, I might read the email to see if there is a reason I should follow up. If I am uncomfortable, I might call the person. Only after I am comfortable that this is a legitimate email from someone I should trust do I explicitly tell my email application to trust that certificate.

Today there is not much risk that someone will send me a signed email that I should not trust. I am confident that I can weed these out. The number of individuals that I should trust is likely 100 or fewer, a rather easily managed number. I prefer that my X.509 Digital Certificate NOT be published in a white-pages directory. I am far more worried that someone will use my publicly available GE-issued X.509 Digital Certificate to send me encrypted SPAM, as encrypted email can't be inspected by the GE SPAM filters.

I offer here a way to get started on the 'why' of Trust. Start with the small number of connections that this small provider needs to communicate with. I really don't think that we need a large system of Trust. Like the credit-card industry, starting with the local department store and building upward is a good roadmap. Setting expectations is where we need to focus awareness: awareness of what certificates are, what encrypted email is, what signed email is... These tools, just like any new tool, need to be explained to the community. Once people know what the tool is and how it should be used, they can better understand and use it. Encrypted email from someone they have never worked with before is 'unexpected'.