Recently, Grahame posted a request on his blog asking for feedback on how FHIR represents extensions (http://www.healthintersections.com.au/?p=2467). Currently, all extensions appear under <extension> tags. This makes the wire format hard to read, particularly when extensions are nested. The worst-case scenario involves using the Basic resource to create a new profile built entirely of extensions, resulting in definitions and instances with deeply nested <extension> tags and no readable names, falling short of FHIR’s goal of “a human-readable wire format for ease of use by developers.”
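To illustrate the readability problem, here is a sketch of a complex (nested) extension in the current syntax; the URLs are invented for this example, but the `extension`/`url`/`value[x]` structure follows the FHIR specification:

```xml
<extension url="http://example.org/fhir/ext/citizenship">
  <extension url="http://example.org/fhir/ext/country">
    <valueCode value="US"/>
  </extension>
  <extension url="http://example.org/fhir/ext/period">
    <valuePeriod>
      <start value="2009-03-14"/>
    </valuePeriod>
  </extension>
</extension>
```

Every element is named `extension`; the only clue to what each one means is the `url` attribute, which is exactly the readability complaint above.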
The original design reason for representing extensions in this way was to enable one master XSD to validate any FHIR XML instance. Validation with the master XSD, however, is not very effective because it only incorporates the base resource definition, and does not validate extensions or check constraints specified by profiles, such as cardinality restrictions, data type restrictions, or fixed values. Since profiles will generally be in use, an instance would ideally be validated based on the profiled resource. Thus, having a single, master XSD might not be a totally compelling reason for the current representation of extensions. Moreover, a number of implementers have chosen to implement JSON only, and then the reasoning shifts dramatically.
In his post, Grahame presents four alternative representations, but dismisses them for various reasons, ultimately expressing his preference for the current syntax. His reasoning seems sound based on the alternatives he considers, but before abandoning the discussion, it might make sense to step back and reconsider the requirements before passing judgment. So here goes:
- The representation MUST support distributed extensibility, whereby multiple parties may independently create different extensions, possibly with the same names. HL7 should not limit who can create extensions, and given that openness, naming collisions may occur and should be planned for.
- The representation MUST distinguish modifying extensions from normal extensions. The receiver must be able to tell if an unknown extension is a modifying extension (one that may change the meaning of the resource), without reaching out over the internet to find the definition.
- The representation MUST allow the receiver to determine the identity of each extension, by providing a unique URI identifying the extension, or by linking to a structure definition that contains the URI for the extension.
- The representation MUST allow parsing the URI, the value, and the data type of an extension, without needing to reach out over the internet to find the definition of the extension.
- The representation MUST be able to express conformance to multiple profiles, allowing validation by linking to the structure definition(s) that define the extensions and additional constraints defined by profiles.
- The representation MUST be easily readable, even when there are multiple nested extensions.
- The representation SHOULD leverage existing standards, avoid abusing existing standards, and avoid creating a new standard if a similar one already exists.
- The representation SHOULD leverage existing standards for schema validation, such as XSD and schematron.
- The representation SHOULD treat extensions and core elements in the same way, rather than requiring a different type of processing logic for extensions.
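To make the second requirement concrete: in the current FHIR syntax, a modifying extension is flagged by its element name alone (`modifierExtension` rather than `extension`), so a receiver can recognize it without dereferencing the definition URL. A sketch, with an invented URL:

```xml
<modifierExtension url="http://example.org/fhir/ext/do-not-resuscitate">
  <valueBoolean value="true"/>
</modifierExtension>
```

Any candidate representation would need an equivalent in-band signal so that an unknown modifying extension is never silently treated as an ordinary one.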
Do you agree with these requirements? Stating the requirements provides a rational basis for evaluating proposed representations for extensions. In the next post, I will take a look at the current representation, and each of Grahame’s proposals, in terms of these requirements.
I appreciate the careful formulation here!
> The representation MUST be easily readable, even when there are multiple nested extensions
I wish this were the principle, but in practice it’s hard to characterize this in a binary way (so we can’t really tell if the MUST is satisfied; in my view it’s not, today… so it should probably be a SHOULD).
I agree. It’s a SHOULD. Too subjective for a MUST.
One design trick could be used: define the core elements with the extension syntax, so core elements and extensions share the same mechanics 😉
I take the winky emoticon to mean you already know that would move FHIR in the wrong direction, to satisfy the hobgoblin of small minds, aka, consistency.
Do we need a new resource in the specs – GreatPerson?
A couple of comments:
“The representation MUST be able to express conformance to multiple profiles, allowing validation by linking to the structure definition(s) that define the extensions and additional constraints defined by profiles” – well, you can validate a resource against a structure definition whether it claims to be conformant to it or not. I remain mystified as to why people believe that a resource’s claiming conformance is what *allows* validation.
“The representation SHOULD treat extensions and core elements in the same way, rather than requiring a different type of processing logic for extensions” – well, maybe. Depends on what you think ‘different’ means.
You missed an important requirement, based on experience with existing extension mechanisms: the standard industry ways of generating code must produce code that reads and writes all extensions, whether or not they were defined when the code was generated. Operationally, lots of programs can’t use extensions with CDA etc., because they were generated from the schema, and the schema doesn’t describe them. Perhaps you implied this with your requirements, but I think it should be stated explicitly.
Good point, I agree.
As we’ve discussed, code generation could coexist with any format of extensions (just collect all unknown elements into a hash-map or array of extensions). Why should this affect the data format? On the other hand, it would be nice to introduce new elements as extensions, test and prove them, and then move them transparently into the core.
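The “collect unknown elements” idea can be sketched quickly. This is a hypothetical illustration, not FHIR reference tooling: a hand-rolled parser whose “generated” part knows only a fixed set of fields, and which stashes everything else in a list so unknown extensions survive a read/write round trip instead of being silently dropped. The element names and URL are invented.

```python
# Sketch: generated-style code that preserves unknown elements (assumed names/URLs).
import xml.etree.ElementTree as ET

KNOWN = ["name", "birthDate"]  # fields the "generated" code knows about

def parse_keep_unknown(xml_text):
    """Read known fields; stash any unrecognized child elements instead of dropping them."""
    root = ET.fromstring(xml_text)
    record = {"_unknown": []}
    for child in root:
        if child.tag in KNOWN:
            record[child.tag] = child.get("value")
        else:
            record["_unknown"].append(child)  # keep the raw element
    return record

def serialize(record, root_tag="Patient"):
    """Write known fields back, then replay the unknown elements unchanged."""
    root = ET.Element(root_tag)
    for tag in KNOWN:
        if tag in record:
            ET.SubElement(root, tag, value=record[tag])
    for el in record["_unknown"]:
        root.append(el)
    return ET.tostring(root, encoding="unicode")

doc = ('<Patient><name value="Ann"/>'
       '<extension url="http://example.org/fhir/ext"><valueString value="x"/></extension>'
       '</Patient>')
rec = parse_keep_unknown(doc)
out = serialize(rec)
```

The key point is that the round trip works the same no matter which syntax the extensions use, which is why the comment argues code generation should not constrain the wire format.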
Pingback: Syntaxes for #FHIR Extensions (Part 2) – Light My FHIR