Thursday, December 23, 2010

S/MIME vs TLS -- Two great solutions for different architectures

John Halamka, with help from Dixie Baker, wrote about a topic that came up in the HIT Standards discussion: why couldn't the Direct Project secure transport just be TLS?

Arien Malec responded with a good discussion of some issues that were uncovered during prototyping. I am not convinced that this was fully investigated, but it is still a good read and a well-done description of SSL, TLS server authentication, and TLS mutual authentication.

My answer is: Security is hard, Trust is hard…

None of the posts I have read today looks at the total picture. I recommend looking at the Security and Trust workgroup output (yes, I am a very active member). I think the Threat Models are most important here:
Threat Models (WG Consensus Achieved)
Direct Project Security Overview (WG Consensus Achieved)
Certificate Pilot Recommendations Discussion (WG Consensus Achieved)
With TLS, you MUST create a walled garden, and any hole in the wall causes the whole thing to fail. If TLS were used, there would be a private e-mail network just for trusted organizations; every organization on it would need to be trusted.

TLS is not as care-free – PKI free - as Dixie tries to say it is. The comments about not being able to know if you are connecting to the right place are exactly evidence that PKI is needed here. What is not needed with TLS is an out-of-band mechanism to distribute certificates, since TLS includes certificate discovery/distribution inline. But one must still have a trust relationship.
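To make that concrete, here is a minimal sketch of mutually-authenticated TLS using Python's standard ssl module. The file names and host are hypothetical; the point is that each side presents a certificate and each side deliberately chooses which CA it trusts.

```python
import socket
import ssl

# Minimal sketch of TLS mutual authentication (hypothetical file names and host).
# Both sides restrict trust to a deliberately chosen CA rather than the
# browser-style default bundle.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_cert_chain(certfile="my_org.pem", keyfile="my_org.key")  # our identity
context.load_verify_locations(cafile="trusted_partners_ca.pem")       # who we trust
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True

with socket.create_connection(("hie.partner.example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="hie.partner.example.org") as tls:
        print("Negotiated:", tls.version())
        print("Server certificate subject:", tls.getpeercert()["subject"])
```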

It is this distribution of certificates that is the hard part of S/MIME. I send signed email messages to the people whom I want to be able to send me encrypted content. This mechanism of certificate distribution, sending a prior signed message, is the one that is used in practice today. I have tried to argue that any use of NHIN Direct messaging was very likely preceded by conversations that were not patient specific but were related to building a relationship. Between organizations this conversation was about what services one might offer another. These conversations likely included discussions about legal restrictions. These conversations could easily have transmitted the certificate and set up a trust relationship.
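For illustration, here is a minimal sketch of producing an S/MIME signed message, assuming the pyca/cryptography package and hypothetical file names. The signed output carries the signer's certificate, which is exactly how the recipient obtains the certificate needed to encrypt future replies.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Load our certificate and private key (hypothetical file names).
with open("dr_jones_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("dr_jones_key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

body = b"Hello -- establishing a relationship; my certificate rides along with this signature."

# Build an S/MIME (PKCS#7) detached signature; the output includes our certificate,
# which the recipient's mail client can harvest for later encrypted replies.
signed_mime = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(body)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
print(signed_mime.decode("ascii", errors="replace")[:200])
```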

also see: Healthcare Provider Discoverability and building Trust

TLS is great for point-to-point communications, or for communications with trusted-intermediaries. But it fails for end-to-end asynchronous message communications, and is NOT the right approach for e-mail.

Wrapping:
There is a little bit of discussion about 'wrapping'. The wrapping that is being discussed is S/MIME wrapping… that is to take an email message and put it inside another email message… Think Russian dolls… The theory is that if you keep wrapping enough times it must be secure. This can hide the ‘subject:’ line, but doesn’t hide the ‘to:’ or ‘from:’.
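A minimal sketch of the wrapping idea using Python's standard email library (addresses are hypothetical). Note how the outer message still needs routable 'to:' and 'from:' headers even though the inner message, once encrypted, would hide its own.

```python
from email.message import EmailMessage

# The inner message is what would be S/MIME signed/encrypted; its Subject would be hidden.
inner = EmailMessage()
inner["From"] = "dr.jones@direct.example.org"
inner["To"] = "dr.smith@direct.example.org"
inner["Subject"] = "Referral for patient X"     # protected once the wrapper is encrypted
inner.set_content("Clinical content goes here.")

# The outer (wrapping) message still exposes routable To/From on the wire.
outer = EmailMessage()
outer["From"] = "dr.jones@direct.example.org"   # visible to every mail hop
outer["To"] = "dr.smith@direct.example.org"     # visible to every mail hop
outer["Subject"] = "message"                    # deliberately uninformative
outer.add_attachment(inner)                     # nested as message/rfc822

print(outer.get_content_type())   # multipart/mixed containing a message/rfc822 part
```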

Yes, one can have a highly intelligent email router that puts a false 'from:' on the outside email and somehow uses a trusted-intermediary endpoint address ('to:'). Clearly there are lots of discovery problems here, or configuration and databases. But if you are going to configure this kind of a system, then we should just use a more capable system.

Scope of NHIN Direct
People keep forgetting that NHIN Direct is a transport for VERY SMALL providers with VERY LITTLE technology. These people only need to communicate with a few 'others'. We need the NHIN Direct solution to help low-tech people, and we need it to work with little beyond off-the-shelf software, exactly commercial (or open-source) off-the-shelf. The NHIN Direct transport should NOT be looked to as a solution for more intricate workflows.

As part of this, NHIN Direct has also recommended that email addresses could be departmental or organizational, that is, one address that represents a whole department or whole organization. Thus there isn't a huge number of addresses; there can be as few as one per organization. I don't think this is a great solution, as it basically brings us back to organization-to-organization rather than end-to-end; but it does put the policy decision in the hands of those who should make the policy decision.

So, I suggest they keep the scope clear for NHIN Direct… Allow simple e-mail with slightly complicated security… and get working on robust health information exchanges. Use NHIN Direct as a stepping stone, and stop thinking it is a long term solution.

See:  Directed Exchange vs Publish/Discover Exchange and NHIN-Direct Privacy and Security Simplifying Assumptions

Wednesday, December 8, 2010

Healthcare needs to watch and learn from the cascade of security failures

Wikileaks is the big news today. This blog article has NOTHING to do with WikiLeaks itself, but has everything to do with using the events associated with WikiLeaks today as a useful use-case to analyze in the context of Healthcare Security and Privacy.

It scares me that someone might think that exposing PHI on WikiLeaks would be an appropriate way to ‘expose’ a healthcare abuse. As much as the current exposure of the diplomatic cables has resulted in mostly embarrassing gossip, exposing PHI would be far more dramatic. The methods that WikiLeaks seems to use don’t give me any comfort that exposure of PHI might not happen. I hope that it would not. But this is not what I want to cover here.

I want to look at the security failures upstream of what we are seeing on WikiLeaks. I think there are very useful lessons to learn.

Data Classification: I understand that the Diplomatic Cables were originally classified 'confidential' or 'top secret'. That classification was later considered too restrictive, so they were re-classified as 'secret'. This re-classification clearly was a key to the exposure of these cables, as about 3 million people have 'secret' clearance. This re-classification was felt necessary to give more access, which might have been the right thing.

It is my understanding that when data gets re-classified there should be a new assessment of the data, which might result in some information being blinded. I don't think that this would have removed the gossip, but I do wonder why such a bulk of data was either (a) originally classified wrong, or (b) so easily reclassified without recourse. I want to point out clearly that I am still strongly for Data Classification, and I totally agree that there needs to be functionality for re-classification (up or down). It simply seems like a sloppy process was used in this case.

What we can learn is something we are struggling with in healthcare: Data Classification is a rather blunt instrument, and it doesn't work very well to support fine-grained access controls. But it is a start, and better than no Segmentation. See: Data Classification - a key vector enabling rich Security and Privacy controls

Access Controls: The re-classification gave access to a broader range of individuals. It is not clear why this was necessary. I will not try to figure out whether this specific user should have had access to this specific set of data. What I do wonder, though, is why this user also had the permissions necessary to export the reports and put them onto non-secured storage. People are very creative, and I suspect this individual was creative enough to have overcome many Access Controls. Military or Diplomatic secrets would seem to need many layers of protection. Either they existed or this individual is more creative than most.

Audit Controls: The failure that bothers me the most is that either no Security Audit Logs were produced indicating that someone was viewing/copying THOUSANDS of documents, or no one was watching the log. Even an automated program could easily have triggered an alert when a hundred documents were viewed from the same place (a small sketch of such a check appears after the quote below).  See Accountability using ATNA Audit Controls And ATNA and Accounting of Disclosures

There has been a lot of press speculation that all of the documents, starting with the helicopter attack video, have come from the same source, a young U.S. Army intelligence analyst, who has been arrested. If that is the case it looks like access to vast databases of secret U.S. government documents was rather broadly available and access was not reasonably logged. None of the documents released to date have been marked top secret so, maybe, the database had some level of data segregation. But, if news reports are accurate, no log was kept of access to the database or, if such a log exists, it was not regularly reviewed, since suspicion was directed at the analyst by a person outside the U.S. military. More
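To show how little it takes, here is a small sketch of the kind of audit-log review that would have flagged this: count accesses per user and alert when a threshold is crossed. The events and threshold are purely illustrative.

```python
from collections import Counter

# Illustrative audit events: (user, object accessed). In practice these would
# come from the security audit log (e.g., ATNA audit records).
audit_events = [("analyst42", f"cable-{i}") for i in range(250000)] + [
    ("clerk7", "cable-17"),
    ("clerk7", "cable-18"),
]

THRESHOLD = 100  # hypothetical: flag anyone viewing more than 100 records

access_counts = Counter(user for user, _ in audit_events)
for user, count in access_counts.items():
    if count > THRESHOLD:
        print(f"ALERT: {user} accessed {count} records -- review required")
```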

Security and Privacy are not simple, they require checks and balances.

Friday, December 3, 2010

IHE Security/Privacy primer

Most of the IHE profiles in the security space are simply pointers to common IT security protocols. The message is that healthcare is not special when it comes to security; healthcare should just leverage the good work common to all IT. The following is NOT a complete explanation. I assume the reader can use an internet search engine to find resources, including the specifications themselves, using the key terms I provide. Clearly the official profile text and standard text are the best resources.

ATNA - This is a comprehensive profile that indicates that Access Control, Audit Control, and Network Controls are important
  • Access Control
    • There are no standards pointed to by ATNA here. There is simply a statement that a system needs to have access controls that can be shown to a local authority to be sufficient to protect the system's resources according to the local policies
    • If this can't be demonstrated, then the system should not be authorized to be used. Specifically it should not be given a network identity (see network controls)
  • Audit Controls
    • Specifically Security Audit logging
    • This is a very thin profiling of the commonly used SYSLOG protocols for communicating that an auditable event has occurred
      • There are statements that relax size limits
    • This is a FULL PROFILE of the content of that auditable event into an XML-encoded message
    • There is a set of 'auditable events', that is, events that, when they happen, should cause the audit log to capture what happened
    • The audit log message fully describes: Who, What, Where, When, How, and Why
    • This requires that the audit event time be kept consistent, which is why the CT profile exists. The CT profile says to use the common NTP protocol to keep clocks synchronized to within about 1 second, which is generally good enough for security audit logs
    • See Accountability using ATNA Audit Controls and ATNA and Accounting of Disclosures; a minimal sketch of emitting such an audit message appears after this ATNA list
  • Network Controls
    • There is a set of standards specific to the type of network communications. They all address three aspects of network communications of protected information. Not all network communications are protected; for example, DNS generally does not need to be protected, as it is not (mostly) communicating protected information
    • Authentication of both sides of all communications that include protected information
      • This is consistently done using machine/service-end-point identities using public/private keys (certificates)
      • Certificates can be directly trusted. One-by-one
      • Certificates can be trusted because they 'chain' to a known authority (certificate) that is trusted directly
      • Caution: the method used on the internet with SSL certificates is technically the same, but not administratively the same. In the internet browser world the 'known authorities that are trusted' are authorities that your internet browser decides should be trusted.
      • ATNA wants the trust decision to be made deliberately, not made by a browser vendor.
    • Encryption of the communications that include protected information
      • AES is the algorithm of choice. This choice does NOT mean that other algorithms can't be used. It only means that at least AES needs to be present. This is to assure interoperability, NOT to constrain algorithms
    • Integrity control of the communications that include protected information
      • SHA-1 is the algorithm of choice. This choice does NOT mean that other algorithms can't be used. It only means that at least SHA-1 needs to be present. This is to assure interoperability, NOT to constrain algorithms
      • Note also that the communication is transitory, not long-term storage, so the use of SHA-1 is acceptable. SHA-1 tends not to be acceptable for long-term signatures (see DSG below)
    • TLS is the general mechanism when no other mechanism is used. TLS is the 'standardized' protocol that is more commonly called SSL (mostly)
      • IHE profiles TLS such that BOTH sides of the communication need to be authenticated (mutual authentication)
      • Common SSL usage only authenticates the server. That is not sufficient, as it allows any client to connect. In healthcare we want to know where the data is coming from and where it is going, not just the server
    • S/MIME is used for e-Mail
    • WS-Security mechanisms are allowed for Web-Services that want to use end-to-end security. Note that none of the IHE profiles leverage this, as they are all profiles defining endpoints that need access. But this is specified because there may be deployments where someone puts, in the middle of an IHE transaction, some form of intermediary that needs partial access
    See Designing a Secure HIE
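As referenced in the Audit Controls bullets above, here is a minimal sketch of emitting an ATNA-style audit event over SYSLOG. The repository address and event values are hypothetical, the schema is heavily abbreviated, and a real ATNA deployment would use RFC 5424 SYSLOG, normally protected with TLS, rather than plain UDP.

```python
import logging
import logging.handlers

# Illustrative ATNA-style audit message (who/what/where/when). The real schema
# is the full DICOM/RFC 3881 AuditMessage, and the timestamp should come from
# an NTP-synchronized clock (the CT profile).
audit_xml = (
    "<AuditMessage>"
    '<EventIdentification EventActionCode="R" '
    'EventDateTime="2010-12-23T10:15:00Z" EventOutcomeIndicator="0"/>'
    '<ActiveParticipant UserID="drjones" UserIsRequestor="true"/>'
    '<AuditSourceIdentification AuditSourceID="ehr.hospital.example.org"/>'
    '<ParticipantObjectIdentification ParticipantObjectID="patient-123" '
    'ParticipantObjectTypeCode="1"/>'
    "</AuditMessage>"
)

# Hypothetical audit record repository; plain UDP shown only to keep the sketch short.
handler = logging.handlers.SysLogHandler(
    address=("audit.hospital.example.org", 514),
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
)
logger = logging.getLogger("atna.audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info(audit_xml)
```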

PWP - This is a very thin profile that simply says that for user directory, use LDAP
  • LDAP is a protocol for doing queries against a directory. The thing behind it might be a database or might be a purpose-built directory
  • The schema for describing a user is based almost entirely (99%) on the common LDAP user schema
  • The little addition to the schema is really not that important
  • This profile also describes how to find the directory. Using a special DNS record that points at the LDAP directory
  • There are pointers to how LDAP can be secured using TLS.
  • There are pointers to how LDAP can be used to authenticate a user, through the LDAP security mechanisms
  • LDAP is the standard that Windows ActiveDirectory uses for accessing directory entries
  • Healthcare Provider Directories (HPD) is a profile that also leverages LDAP, for use external to the organization (a minimal query sketch follows this list)
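As referenced above, here is a minimal sketch of the kind of white-pages lookup PWP describes. It assumes the third-party ldap3 package; the directory host, base DN, and person are hypothetical.

```python
from ldap3 import ALL, Connection, Server

# Hypothetical directory host and base DN; anonymous bind for illustration only.
server = Server("ldap.hospital.example.org", get_info=ALL)
conn = Connection(server, auto_bind=True)

conn.search(
    search_base="ou=People,dc=hospital,dc=example,dc=org",
    search_filter="(cn=John Smith)",
    attributes=["displayName", "mail", "telephoneNumber"],
)
for entry in conn.entries:
    print(entry.displayName, entry.mail, entry.telephoneNumber)
```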

EUA - Very thin profile that simply says to use Kerberos protocol for safely authenticating users
  • Kerberos is a pluggable protocol, but really is only used for username/password
  • Kerberos is the standard that Windows ActiveDirectory uses for authenticating users.
  • Kerberos is the standard that Windows uses at windows login (with minor Microsoft extensions)
  • Kerberos is very common in Unix environments as well; it was invented at MIT (Project Athena)
  • Kerberos is really good for within an organization, but has real problems that prevent it from being useful on the internet
  • There are also Kerberos ways to pass authentication 'tickets' between an application and server
  • See: Kerberos required in 2011 then forbidden in 2013

XUA - Very thin profile that simply says to use SAML Identity Assertions for authenticating users on Cross-Enterprise transactions
  • This is the solution for the space where Kerberos doesn't work well
  • SAML is both a standard XML way to describe a user (with trust mechanisms for that data) and a protocol
  • The protocol is not part of the XUA profile. It is ok, but not as important as the assertion
  • There are other protocols that are more common, WS-Trust is one.
  • There is also good reason for a product that does its own user authentication to simply create SAML assertions w/o protocol
  • See - Federated ID is not a universal ID and IHE ITI XUA++ - Trial Implementation; a skeleton assertion follows this list
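To give a feel for what XUA's payload looks like, here is a skeleton SAML 2.0 assertion. All values are illustrative; a real XUA assertion is digitally signed and carries the additional attributes the profile specifies.

```python
# Skeleton of a SAML 2.0 assertion of the kind XUA profiles (illustrative values).
SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

assertion = f"""
<saml:Assertion xmlns:saml="{SAML_NS}" ID="_a1b2c3" Version="2.0"
                IssueInstant="2010-12-03T10:15:00Z">
  <saml:Issuer>https://idp.hospital.example.org</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      dr.jones@hospital.example.org
    </saml:NameID>
  </saml:Subject>
  <saml:Conditions NotBefore="2010-12-03T10:15:00Z" NotOnOrAfter="2010-12-03T10:25:00Z"/>
  <saml:AuthnStatement AuthnInstant="2010-12-03T10:14:30Z">
    <saml:AuthnContext>
      <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
      </saml:AuthnContextClassRef>
    </saml:AuthnContext>
  </saml:AuthnStatement>
</saml:Assertion>
"""
print(assertion.strip())
```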

DSG - A profile of XML-Digital Signatures to provide long-term signature across a 'document'
  • Can sign any document type. Not limited to XML type documents such as CDA
  • Signature document is created that is an XML-Digital Signature blob
    • Original document to be signed is signed by 'reference'.
      • Encapsulation is not a bad thing, but it does make the original document harder to get at, especially if that original document is not XML based
      • Signature by reference allows the original document to continue to be accessed normally by applications that don't need to validate the signature, while keeping the signature present for those applications that do need it
    • The digital signature includes the Date/Time of the signature. Assuming trustable date/time
    • The digital signature includes the certificate of the signer
      • Note that signatures need to be valid for decades, which brings up interesting certificate management issues not addressed here
    • The digital signature includes the reason for the signature. Why was it signed? What does the signature mean?
  • XDS Metadata shows that the signature document is in a 'signs' relationship to the original document
    • This allows for finding the signature from a document, and finding the document from the signature
  • Works for XDS, XDM, XDR, XCA
    • Might work for other environments as well. The main thing that must happen is for there to be a way to dereference the document ID number found in the digital signature document to get to the document that is being signed.
  • Might be future work to have an encapsulated flavor
  • See: Signing CDA Documents

BPPC - A profile of a document that represents a patient agreeing to a privacy policy (e.g. Consent)
  • Uses CDA to capture that the patient has agreed to a policy by reference
    • The policy is not actually ever defined. This is because there is not sufficient maturity to any standard for encoding of privacy policies. Therefore BPPC assumes that someone can write the policy in human readable language and give that a unique identifier (OID). This OID is used as a reference.
  • Supports the consent being digitally signed with DSG
  • Supports encapsulation of a scanned image of something (e.g. an ink on paper signature) using the XDS-SD profile
  • Supports time limited consents
  • Supports both positive policies (OPT-IN) and negative (e.g. OPT-OUT)
  • Recognizes that the absence of an instance of a policy means that some other policy is in place. Commonly called 'implied consent'.
  • Defined for XDS, XDM, XDR, XCA - and may work for other environments
  • See Stepping stones for Privacy Consent

ENC - new profile being worked on this year - Encryption of documents and/or XDM
  • Because this is under development the details are yet to be written
  • Document encryption is favored as it would be transport agnostic, but the usefulness of this for long-term-storage use-cases like XDS and XCA is unclear
  • XDM encryption would likely leverage the e-Mail option that exists today
    • The e-Mail option uses S/MIME to secure the ZIP of the XDM file-system
    • The modification from existing profile would be to explain how to save the S/MIME message as a file rather than delivering it over SMTP
    • This file would simply be an S/MIME message, thus protected with whatever S/MIME protections were used.

confidentialityCode - this is NOT a profile, but is a security/privacy concept built into almost all of the healthcare standards.

De-Identification handbook - this is NOT a profile, but is a document being written this year.

Other

Secure Design
  • All this assumes that your system is secure by design. This includes limiting opportunities for bad guys.
  • A risk-based approach should be used to assure that all risks are managed appropriately
  • Unnecessary services are disabled, security patches are applied
  • Multiple layers of defense are in place both within and in the network
  • Etc.

Tuesday, November 30, 2010

The Direct Project and IHE/HIMSS

"The Direct Project", formerly known as NHIN Direct, has officially changed their name, the specifications are maturing, the reference implementations are maturing, and the discussion is turning to the topic of where will the output of the Direct Project land and be maintained. I think that IHE is a very logical place and a place that I find many synergies between what the NHIN Direct project did and what an IHE US Realm would do.

The Direct Project (NHIN Direct) specifications are almost completely aligned with the IHE XDM – E-Mail option. They have simply added that sending a single document without the XDM wrapping is allowed. This is the kind of thing that I would expect a realm specific workgroup to acknowledge.
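To make that alignment concrete, here is a rough sketch of the XDM media layout that both specifications share, built with Python's zipfile module. The subset directory and file names are illustrative (drawn from my recollection of the XDM profile) and the metadata content is elided.

```python
import zipfile

# Rough sketch of an XDM "media" ZIP as I recall the profile's layout:
# INDEX.HTM and README.TXT at the root, plus an IHE_XDM directory holding one
# or more submission-set subdirectories, each with METADATA.XML and documents.
with zipfile.ZipFile("xdm_package.zip", "w") as z:
    z.writestr("INDEX.HTM", "<html><body>Human-readable index</body></html>")
    z.writestr("README.TXT", "Created by Example EHR vX.Y")
    z.writestr("IHE_XDM/SUBSET01/METADATA.XML", "<!-- XDS submission set metadata -->")
    z.writestr("IHE_XDM/SUBSET01/DOC0001.XML", "<!-- CDA or other document -->")
```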

The Direct Project (NHIN Direct) has done some analysis of the IHE specifications and has provided the IHE ITI committee with its recommendations on both the XDM and XDR profiles. This analysis was done in the context of the Direct environment and intended audience. Most importantly, this analysis was done in the context of a set of Privacy, Security, and Operational policies. Again, this is the kind of thing that a US Realm committee would be expected to do. For example: They have observed that the XDR and XDM metadata rules are too onerous for the Direct PUSH use-cases, especially when applied to some specific use-cases that the IHE ITI committee had not yet looked at. The Direct Project also looked at the interaction between XDM and XDR at a gateway service and applied privacy policy in such a way as to question why XDR has integrated the sender and recipient addresses inside the more sensitive metadata.

The Direct Project group is now struggling with Testing. They don't have experts in writing test plans, test procedures, or test tools. The closest thing they do have is expertise in the unit-testing that comes along with the source code of their "reference implementation". I think this is also a good place for a US Realm to focus a local Connectathon toward the local realm use-cases, and to tie this to the local demonstration project (e.g. HIMSS Interoperability Demonstration). For this the IHE expertise in test tools and testing events like Connectathon would be very valuable.

The Direct Project does have a component that IHE does not have today, this is the open-source reference implementation.  IHE does come close with the open-source Registry from NIST; but this was more of an individual effort (one that needs to be recognized often), rather than a formal part of specification approval. I think there is an opportunity for IHE to extend a welcoming hand to this style of specification validation.

The Direct Project also has the Implementation Geographies. It is not clear how well this will work out inside the IHE context. I think it is something unique to the NHIN Direct project at this time, due to the style of solution vs the 'pure interoperability specification' that IHE tends to focus on. I know that there might be concerns with getting too close to implementations, thus presenting a conflict-of-interest.

The Direct Project is an excellent example of what a region can and should do with the IHE specifications. They deviated only slightly, and when they did they provided the feedback to IHE for discussion and iteration. There are some useful lessons to be learned by IHE on how to excite a region and to accelerate implementation through reference implementations and implementation geographies.

Thursday, November 25, 2010

Engage with Grace

If you are reading this on Thanksgiving day in the US, print this out and read with your loved ones.


Really, read it. That is the message.  To learn more, see http://www.engagewithgrace.org/

John

P.S.  I am thankful that there are so many people who care so much for others that they are willing to work in healthcare. Top to bottom, thanks to everyone.

Wednesday, November 24, 2010

IEC 80001 - Risk Assessment to be used when putting a Medical Device onto a Network

The history of IEC 80001 is best summarized by Nick Mankovich, Sr. Dir. Product Security and Privacy, Philips Healthcare, in a white paper he wrote for "Information Security Magazine". I have worked side by side with Nick on this standard and want to give him every credit I can for being the lead on the integration of Security Risks. He was recognized as one of the Information Security magazine Security 7 Award winners.
Less than five years ago, Brian Fitzgerald of the U.S. Food and Drug Administration called together a diverse mix of health care folks to talk about the harm that was being done from poor networking of medical devices in hospitals. His agency had reports of injury and death as a result of improperly connected networked devices. In that first brainstorming meeting of December 2005, there were biomedical engineers, IT professionals, regulatory specialists, medical device risk management specialists, security professionals, and medical device engineering staff. Brian urged us to organize and do something to help the world avoid this harm. To avoid international mismatches and "not invented here" issues in government regulatory authorities, he suggested this be pursued as a global standard. Five years later, we are very close to the final vote on the first international standard to address the Application of Risk Management to IT-networks Incorporating Medical Devices (IEC-80001-1).
It was approved September 24th. Nick continues:
This standard lifts security and privacy risk out of the afterthought category into the mainstream of health care delivery. It does this by building around the principle that decisions in any new device integration project in health care need to be built around some simple concepts. In the parlance of IEC-80001-1, medical IT-network risk management proceeds with a careful examination and understanding of three key properties: (1) safety, (2) effectiveness and (3) data and systems security. By considering all three, we can first "do no harm" while effectively delivering on the organization's health care mission. This is done with careful and explicit treatment of the appropriate level of confidentiality, integrity, and availability.
Of course, today's IT staff and biomedical engineers are skillful at keeping the highest levels of safety and effectiveness. However, with IEC-80001's explicit inclusion of data and systems security breach into its definition of harm, we have paved the way for an open and honest discussion of the C-I-A [Risks to Confidentiality, Integrity, and Availability] impacts of an interconnection project or a network change. It allows a consideration of the harm brought to individuals when confidentiality is threatened and, for the first time, consideration of the harm of privacy compromise is an essential part of the IT, biomedical engineer, caregiver, and compliance discussions.
There are specific requirements on Medical Device vendors that Nick explained in an email to the NEMA workgroup:

For medical device manufacturers, it includes requirements for risk disclosure to the Health Delivery Organization that are word-for-word consistent with the 60601 3rd edition. For the most part, this may evolve into a collection called a “80001 risk disclosure statement” that, for safety and effectiveness, would likely be culled from other places in existing manuals/instructions-for-use. Security risk disclosure may evolve into a “next generation MDS2” consistent with something like that described in the Security TR (see below). The establishment of an 80001-consistent “next generation MDS2” is the subject of a MITA/HIMSS working group that is actively discussing the content.

Further, there are some annexes being written right now. Contact your ISO or IEC representative to get copies and to comment on these. The next meeting is in March in Best, Netherlands, immediately adjacent to the JWG3 meetings.

  1. 62A/719/NP Step by step risk management of medical IT-networks; practical applications and examples (Karen Delvecchio of GE Medical)
  2. 62A/720/NP Guidance for the communication of medical device security needs, risks and controls (Nick Mankovich and Brian Fitzgerald of FDA)
  3. 62A/721/NP Guidance for wireless networks (Rick Hampton of Partners Med, Boston) 

Thursday, November 18, 2010

IHE ITI Work set for 2011

The IT Infrastructure (ITI) domain of Integrating the Healthcare Enterprise (IHE) has started the Development Cycle for 2011. The process starts with the Planning committee evaluating "Proposals". This year the ITI committee had only a few proposals to choose from, so it chose to simply prioritize the list rather than eliminate some proposals.

The Technical committee met this week to evaluate how hard each one is, including evaluating whether it is even feasible. There was spirited debate about each one of them. Some work items are simple documentation but require complex decisions, while others have been discussed at length and result in convoluted textual changes.



The Technical committee ultimately decided to eliminate one proposed work item from the list given to them by the Planning committee. The work item that was removed is the Document Sharing Directory Service. This proposal would have resulted in a way to publish service endpoints for building a Health Information Exchange using the IHE XDS family of profiles. It is a profile of the UDDI directory standard specific to the IHE XDS family of services. This work is highly influenced by the NHIN-Exchange experience. The main reason it got killed was the overall resource balance across the whole committee; there was also concern about the maturity of the standards, given that new standards are still evolving.

The resulting Work items for 2011

1 XDS Link/Unlink Support - This proposal will result in a profile that explains how to handle Link and Unlink in an XDS Health Information Exchange. This proposal will impact Health Information Exchange Infrastructure including Patient Identity Management and Registries.
2 Cross-Enterprise Document Workflow (XDW) - This proposal focuses on the Cross-Enterprise Workflow for Document Sharing in support of workflow document and status management. Key elements: managing workflow-specific status with a relationship to one or more documents, tracking the health care entity that changed a status associated with a workflow step, and tracking the past steps of the workflow related to a specific clinical event. This is a BASIC profile that real workflows will build upon.
3 XCA Query & Retrieve - This is a proposal to add a new transaction to the XCA environment that would combine an XDS Query and a Retrieve of all documents identified by the Query result. This proposal likely will not be supported in XDS environments. The motivation for this is to make Cross-Community queries easier to process in the infrastructure.There is clear concern about the unintended consequences of adding this profile.
4 XD* Minimal Metadata - This proposal asks for the XDS Metadata requirements to be reevaluated relative to PUSH type transactions (i.e., XDR and XDM). This proposal is inspired by the NHIN-Direct project and will include other changes to XDR proposed by them. 
5 CDA Encryption - This proposal has been shaped and scoped by the Planning committee. It now focuses on adding Encryption capability at the document level. This solution will likely follow the IHE Radiology committee solution in PDI. This is to apply Secure e-Mail (S/MIME) content packaging around the XDM zip file. This results in a portable encrypted package that can be carried on any device or transport. This profile could also create an encrypted container for any document at the document level, similar to the DSG profile.
6 Pseudonymisation and De-Identification in IHE Profiles (Handbook) - This proposal will result in a handbook, a process that IHE domain committees can apply to domain-specific use-cases to develop a de-identification and pseudonymization profile. This work will leverage similar work done in HITSP, and will look closely at the QRPH white paper written last year on the subject. This handbook will also incorporate the process I have documented in De-Identification is highly contextual

Wednesday, November 3, 2010

Signing CDA Documents

I get asked about signing documents in healthcare every other month. Signatures in the electronic world are an area full of technology solutions, but fraught with policy and expense issues.

Understanding the Paper world:
The paper world is full of signatures, and it is very important to understand these signatures in their paper form before we quickly jump to the electronic world. A signature in the paper world is not just ink on paper. There is a ceremony involved in the act of placing a signature on paper. This ceremony is not necessarily a hugely formal thing, but the physical act of being presented with the paper, reading the text on the paper (yes, I know no one reads them, but it does imprint an image in the brain), finding a tool to do the signature (pen), and placing the signature onto the paper. This physical act is important, not just at the time that it is done, but also because it leaves memories of that ceremony. Thus 10 years later, when someone presents us with that ink signature on that paper, we have quick recall as to whether we made that signature or not.


There are also different levels of signatures. I am sure everyone has an experience where initials have been requested, such as at hotel check-in or when renting a car. These initials are a way to make sure you have participated in the ceremony, and to draw your attention to critical parts that you are agreeing to, for example initialing by the daily rate, conditions of fuel filling, etc. These initials carry less importance than the final full signature at the bottom. This placement at the bottom (or end) is important, as it implies that you are signing everything from the top to the bottom.


Factors to think through:
Oftentimes a signature in the electronic world doesn't account for all of these factors.


1) Is your use-case specific to Digital Signatures, or are you also including Electronic Signatures? Electronic Signatures do not use cryptography standards and often are simply attributes. Digital Signatures use open standards so that the signed 'thing' can be verified using various and non-related tools.
-- I  am only going to talk about real Digital Signatures.

2) What is the goal of the signature? This leads to who/what is attesting to what, to whom. Is the signature a transactional signature or a long-term signature? There is a Digital Signature function built into secure transports such as TLS/SSL, but these are only good for covering authenticity and integrity between the sending system and the receiving system. Once the data comes out the other end of the transport tunnel, it is no longer protected by this transport-based signature. Usually no one thinks about the transport signature, but it is important to recognize that this is a specific type of signature.


3) How is the signed data to be used vs how is the signature used? There are use-cases for a digital signature across a CDA document, but when looked at closely there is good reason to keep the signature separate. In many cases where the CDA document is used, the signature is not important. This is not because the signature is not important overall, just that for that specific clinical purpose the signature is not important. The signature is there for legal-challenge reasons. Back in the paper world, it is common for many people to use a signed document, but most of them don't do any validating of the signature. There is an assumption that the document exists because it is legitimately signed. At best they verify only that a signature is present. This is a good reason to keep the signature independent of the CDA document itself. Thus the vast majority of uses of the CDA document don't need to be bothered with the signature.

4) What is the reason-for-signature? This is usually something that people forget to think through. There is some reason why the signer is signing the document. It might be because they are the author. It might be because they have over-read it. It might be simply to indicate that they reviewed it. It might be a signature by a software service indicating only that the document is authentic. The reader of a signature may care only about one of these purpose-of-signature values. They may be looking only for the author's signature. The reason-for-signature should be considered part of the signature. This is the main contribution that the IHE Document Digital Signature (DSG) Profile offered, through extension of XAdES (that and the date/time stamp).

5) Where is the date/time going to come from, and why should the signature validator trust that the date/time is accurate? This is usually a chicken-and-egg problem, but it does need to be considered. Oftentimes the only way to solve this is through policy/procedure. There are technology ways to solve this, for example using a timestamp service that itself signs the timestamp alone (reason-for-signature = attesting time/date). The signed date/time is only important to someone challenging the higher-level signature. This is yet another use-case where having a signature external to the CDA document is helpful.

6) Is the signature only going to be applied to CDA documents? In the IHE Document Digital Signature (DSG) Profile case, they wanted to have a reusable signature that could be applied to any document type. This is an advantage of this model. Having one mechanism that is independent of document type is very helpful. However a drawback of this model is that it is not as useful for a partial signature. So the current IHE DSG profile covers the whole document only. For example the diagram at the left is the layout of the IHE DSG profile for use with a BPPC Consent Acknowledgment document.

7) Are counter-signatures and co-signatures going to be needed? Again, functionality supported by IHE DSG.

8) Are partial signatures really needed? The XML-Digital Signature standard is very powerful and likely the right base standard (as IHE DSG used). * Be careful when using the partial-signature. This is academically possible with XML-Digital signature; but there are known problems with this. This does not mean you should not use partial-signature, but you should be very careful to evaluate if partial-signature is really needed. Often times the need for partial signatures can be handled with a good reason-for-signature.


9) There are ways to use XML Digital Signature that are embedded inside the CDA structure. This is done with a transform that excludes the part of the CDA document where the signature is stored, and thus the final digital signature is stored inside the CDA document. This is likely the most fully functional, but also the most complex. I know that there are known issues with partial signatures, so I worry about this in the short term. It is possible that this will eventually become mature enough to rely on.

10) Whole-package Signature? S/MIME packaging vs IHE XDM are somewhat equivalent from the outside. But there are implementation maturity issues with MIME-Multipart-Related: not with the specification, but with the implementations. This is why IHE chose to use ZIP as the packaging mechanism. The additional advantage of XDM is that the XDS metadata allows for documents of all kinds to have relationships, metadata, and lifecycle. IHE will likely add an S/MIME wrapper to XDM this year. But it will be encapsulating, so that the whole package can be authenticated, encrypted, and verified. So this will be yet another model for Digital Signatures, most applicable to cover a point-to-point conversation, even if that conversation goes through a patient's hands.


Available Technology solutions
So here is a list of possible technical ways to do long-term Digital Signatures in healthcare.
  1. XML-Digital Signature value inserted into the structure of the CDA document as an extension
  2. XML-Digital Signature in Encapsulating mode (Where the outer XML is XML-Digital Signature, and where the contained CDA is untouched)
  3. XML-Digital Signature in Detached mode (Where the XML-Digital Signature is itself a document that signs the CDA document by reference) This is what IHE DSG does.
  4. S/MIME Digital Signature on the CDA Document and any images that go with it 

There are some other technologies that are considered valid Digital Signatures, such as the PKCS family


I prefer either method 2 or 3. I like the IHE DSG model, as it has no impact on those actors that trust the transport and don't feel the need to validate the signature every time the document is used; yet the signature is available when needed.
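To make the detached model (method 3) concrete, here is a rough sketch that assumes the third-party signxml package; the exact arguments may differ by version, the file names are hypothetical, and the XAdES elements IHE DSG requires (reason-for-signature, signing time) are not shown.

```python
from lxml import etree
from signxml import XMLSigner, methods

# Rough sketch of a detached XML-Digital Signature, assuming the 'signxml'
# package. A real DSG signature also carries XAdES elements and a Reference
# URI that lets a validator dereference the signed document.
with open("cda_document.xml", "rb") as f:     # hypothetical CDA document
    cda = etree.fromstring(f.read())
with open("signer_key.pem", "rb") as f:
    key_pem = f.read()
with open("signer_cert.pem", "rb") as f:
    cert_pem = f.read()

signer = XMLSigner(method=methods.detached)
signature = signer.sign(cda, key=key_pem, cert=cert_pem)

# The signature is its own XML document, stored alongside the CDA it references.
with open("cda_signature.xml", "wb") as f:
    f.write(etree.tostring(signature, pretty_print=True))
```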

Digital Signatures are Expensive
Most of what I have covered is readily available and mature technology. But there is one more piece of technology that causes Digital Signatures to not get implemented. This is not because the technology is expensive or hard to use. The thing that typically kills Digital Signature projects is the set of issues around managing the necessary Digital Certificates. It isn't just the Digital Certificate, but also the Private Keys associated with the Public Digital Certificate. I am not going to go into these issues in this article. But it is very important for people to include the expense of certificate management, especially as it relates to long-term digital signatures: issues around identity verification, certificate issuance, private key protection, certificate expiration, certificate revocation, certificate purpose, etc.

Food for thought: Validating a Digital Signature 20 years later includes making sure that the Digital Certificate used was valid at the time that the signature was made.



Tuesday, November 2, 2010

Re: How Open and Broad Should an Interoperability Project, like the Direct Project, Be?

David Tao asks on the Direct Project Blog: How Open and Broad Should an Interoperability Project, like the Direct Project, Be?


Keith responds with a very measured standards based response that I very much agree with. Leave it up to Standards to have standards on being a standard... Leave it up to an uber-standards-geek to know about it. I am sure the non-standard google helped him out.

The Direct Project (I can't stop reminding everyone that this is the project formerly known as NHIN Direct) is being held up as a Best Practice. I think there have been some really good things that have come out of it, but I really can't say that the 'practice', that is the process used, is an improved or impressive process.


Too many participants, Really?

I would like to understand why some people feel that the Direct Project had too many participants. What evidence leads to this assertion? I was involved from the very beginning and participated in almost all meetings. I never saw a time when sheer numbers got in the way.

Well, there is one factor where sheer numbers did cause trouble: the method chosen to run the Direct Project meetings. It is not uncommon to take attendance at the beginning of a meeting. But the Direct Project experimented with a process that had good intentions. When an issue was raised by the chair, the process was to poll everyone in order of their company (my sympathy to those with company names beginning with 'A'). This seems like a good idea, as it forces people to listen and speak. But it didn't result in discussion; it resulted in disjointed comments. If this is the factor that people are using to assert that the Direct Project had too many participants, then I would far prefer to have a different process than less participation.


A way to deal with too many participants that are not contributing is to leverage the attendance list. There were far too few people in attendance at the meetings. We spent lots of time asking if there was anyone in attendance from XYZ company. The IHE process requires that you must attend in order to have voting privileges. This at least keeps everyone on the same page, because they MUST join the calls. I don't know if this is the best solution, but it does work elsewhere.


Normal group behavior
While participating in this group, I didn't notice anything different from any other group. It was very clear to me how this group followed very nicely the four-stage model called Tuckman's Stages of group development. The Forming stage is always painful; the more often one creates totally new groups under new rules, the more often the Forming stage wastes people's time. Bringing in a mature Governance, as Keith mentions, can quicken this phase. I am sure everyone vividly remembers the meeting where we did the Storming; does Redmond come to mind?

Reinvention sometimes happens
My overall observation is that after all of the arguments about RESTful vs e-Mail vs XDR... today we have an open-source Reference Model, in two different programming languages, that has successfully implemented ALL THREE!!!!  I am very excited and happy that we have an open-source reference implementation.

Seems to me this is proof positive that we have a clear and chronic case of Not-Invented-Here. Let's please stop the reinvention, and look to the governance of established standards and profiling organizations. When reinventing the wheel, my guidance is that it should be circular in shape.

Presentation on overall IHE model for Security and Privacy

I have been asked by multiple people lately for an overview of the Security Model that IHE would apply to a Health Information Exchange. The last time I presented on this was back in 2008

2008 - IHE Webinar Session 10: IT Infrastructure: Security and Privacy  - Security and Privacy (ATNA, EUA, XUA, BPPC, DSG) - John Moehrke, GE Healthcare (.pdf | flash)

I have been asked by the IHE ITI Committees to refresh this presentation. There have been a few updates since this presentation. Some clarifications about:
I welcome comments and suggestions on what questions the presentation should answer, which parts of the existing presentation communicate well, and which parts don't work. This will be a project for me this fall, so stay tuned.

Monday, November 1, 2010

Healthcare Provider Discoverability and building Trust

The HIT Policy - Information Exchange Workgroup - Provider Directory Task Force committee had a discussion today around Provider Directories. It is still not clear to me what use-cases they are trying to resolve. I have mentioned this in Healthcare Provider Directories -- Let's be Careful. I think they need to identify the use-cases and then prioritize them relative to how urgent it is that they solve these issues. For example, in the case of Lab there are well-established methods, but there is an urgent need in the case of Community or Cross-Community based Provider-to-Provider referrals, specifically where there is no pre-existing relationship. This ad hoc need is more important than re-inventing where a solution is already available.

There is lots of conflating of the need to discover
  • an individual healthcare provider, with
  • a healthcare providing organizations, with
  • the Network Services of a healthcare providing organization.

Yes these could all be considered needs driving the abstract need for a Healthcare Provider Directory, but they are not all the same thing. If we really want to expand the

Discovering Healthcare Providers and Healthcare Services:
IHE has a Profile for Healthcare Provider Directory (HPD). This profile does cover the first two groups of use-cases as outlined in Healthcare Provider Directories. This is the list of use-cases included in the IHE profile.
  • Yellow Pages Lookup: A patient is referred to an endocrinology specialist for an urgent lab test. The referring physician needs to get the contact data of close-by endocrinologists in order to ask whether one of them can perform this test in their own lab. The patient prefers a female endocrinologist who can converse in Spanish regarding medical information.
  • Identification in planning for events: Emergency response planning requires the identification of potential providers who can assist in an emergency. Providers must meet specific credentialing criteria and must be located within a reasonable distance of the emergency event.
  • Provider Authorization and lookup during an emergency event: During Hurricane Katrina, health care volunteers were turned away from disaster sites because there was no means available to verify their credentials. At an emergency site, the Provider Information Directory can be used to look up a volunteer provider and verify their credentials.
  • Forwarding of Referral Documents to a Hospital : A PCP refers a patient to the Hospital for admission. The PCP needs to send various documentation to the Hospital to be part of their EHR when the patient arrives. The PCP needs to identify the Hospital's electronic address such as email or service end point where the patient's documentation should be sent.
  • Keeping agency provider information current: A German government agency dealing with healthcare services for its constituents wishes to keep its agencies' healthcare provider information current. The agency determines that it will use the Provider Information Directory to access the most current provider information. The German agency only requires a subset of the information available in the Provider Information Directory. On a regular basis, the Provider Information Directory provides to the agency a list of the updated information needed.
  • Providing Personal Health records to a new Primary Care Physician: An individual has changed health plans. As a result that individual must change his Primary Care Physician. The individual has a Personal Health Record and would like to provide that information to his new Primary Care Physician. The individual needs to determine where to have the Personal Health Record transmitted to.
  • Certificate Retrieval: National regulations in many European countries require that an electronically transmitted doctor's letter be encrypted in a way that only the identified receiver is able to decrypt. In order to encrypt the letter, the sender has to discover the encryption certificate of the receiver.
  • Language Retrieval: An individual who only speaks Italian requires healthcare services at an Outpatient Clinic. That individual would like to be able to communicate with the Clinic personnel, if at all possible. The individual or his caregiver needs to determine which clinic supports Italian and provides the service that is required.
Discovering Network Services:
IHE has another profile under development for the third issue. This profile is highly influenced by the experience from the NHIN Exchange project, as well as many Health Information Exchanges. This profile is still under development, so there is not much I can point at. It will not use the same technology as HPD, but will have appropriate linkage to it. The profile has very different uses and very different information needs. This profile will be constraining UDDI, as that is the standard used by services to look up other service endpoints.

Discovered, but not Trusted:
However, just having a Directory entry does not mean you have trust. I often hear people discussing the scenario where a doctor needs to send something to someone else they have never dealt with before, and in this case they want the doctor's system to automatically discover the other system, attach securely, and communicate PHI to it. I don't find this use-case to be very reasonable, even in the case where there is regulatory oversight ensuring that all individuals found in the directory are fully compliant with very strict requirements, far more strict than HIPAA. This is because sending PHI to someone else is about more than just assuring that the endpoint is secure. There are also business relationships that need to be built, including agreements by that endpoint to act on the package.

This is true both of cases like NHIN Direct (The Direct Project), as well as Health Information Exchanges (Directed Exchange vs Publish/Discover Exchange).

NHIN Direct might be less of a problem because their use-cases are primarily manually controlled cases of delivering one package at a time; however, this limit could easily go away with automation. The "Trust" issue for NHIN Direct is embedded in the NHIN-Direct Privacy and Security Simplifying Assumptions. Having these embedded in preconditions does not mean they are trivial or easily automated. These are preconditions because they are actually hard and not easy to automate. I think that it is far more normal that the Doctor will make phone calls prior to sending PHI, or prior to asking for PHI to be sent. These phone calls are very critical validation that the two parties do indeed intend to work together for this patient. These phone calls will likely result in at least a gentlemen's agreement, if not a fully signed agreement.

NHIN Exchange has an extensive process for 'onboarding'. This is not a trivial process and covers many levels of checks and balances. What is important to take away is that the process is not just about the technology. The technology is used to enable the process. The technology is used to certify that an organization has achieved validation, and is also used to indicate that this validation has expired or been revoked. Specifically the Digital Certificates used to 'secure' all communications are this technology. The issuance of a Digital Certificate from the NHIN Exchange authorized "Certificate Authority" is only achieved when the system has been validated against the onboarding process. If ever that system is determined to be compromised this certificate can be revoked. So, clearly technology (Digital Certificates and Public Key Infrastructure) can be a critical part of building trust. But this trust is built prior to technology being engaged.
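As a small illustration of the technology half of that trust: before sending, a system can check that the partner's certificate was issued by the CA it deliberately trusts and is still within its validity period. This sketch assumes the pyca/cryptography package and hypothetical file names; revocation checking (CRL/OCSP) is also required in practice but not shown.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical file names: the exchange's CA certificate and a partner's certificate.
with open("nhin_exchange_ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("partner_gateway.pem", "rb") as f:
    partner_cert = x509.load_pem_x509_certificate(f.read())

# 1) Issued by the CA we deliberately trust? (verify the CA's signature on the cert;
#    assumes an RSA-signed certificate)
ca_cert.public_key().verify(
    partner_cert.signature,
    partner_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    partner_cert.signature_hash_algorithm,
)

# 2) Still within its validity period?
now = datetime.datetime.utcnow()
assert partner_cert.not_valid_before <= now <= partner_cert.not_valid_after

# 3) Not revoked? (CRL/OCSP check required in practice -- not shown here)
print("Certificate chains to the exchange CA and is currently valid.")
```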

Building trust is hard, and keeping trust is sometimes harder. Technology can help, but there is so much more to it.