[Gnumed-devel] Lab import (data) workflow planning
From: James Busser
Subject: [Gnumed-devel] Lab import (data) workflow planning
Date: Sat, 02 Feb 2008 23:13:09 -0800
I believe we have so far only loosely talked about how the data flow
may need to work.
In the GNUmed EMR we may have on record (in clin.lab_request) a
fk_test_org, a request_id and a lab_request_id. *However*, in some
locales this will be true for some but not all patients (think of
patients on whom the doctor is merely being copied), and in other
locales (e.g. outside of Germany) no workflow exists to generate,
pass along, and receive back the fields as used in Germany.
Source files will contain messages that pertain to multiple patients
in one of (I think just) four scenarios:
... persons who already exist in GNUmed and are automagically
matchable, i.e. matchable according to configurable rules that define
a match adequate enough to not need user verification
... persons who already exist in GNUmed but whose matching requires
user assistance (or, at least, verification)
... patients who do not yet exist in GNUmed but who are appropriate
to create as new patients from the data
... patients who do not yet exist in GNUmed whom the praxis may not
wish to create (e.g. information sent in error)
Regarding this last scenario, the praxis may nevertheless choose to
create the person, even though the person did not receive care, in
order to capture the communication to the lab about its error.
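The routing decision behind the scenarios above could be sketched as a
small scoring function. This is only illustrative: the function name,
the candidate-scoring input, and the thresholds are all my assumptions
(standing in for the "configurable rules"), not existing GNUmed code;
the no-match outcome covers both create-the-patient and
decline-to-create, since that split is a user/praxis decision.

```python
# Hypothetical sketch of the per-message routing decision.
# All names and thresholds are assumptions, not GNUmed API.

AUTO_MATCH = "auto_match"      # scenario 1: write straight to clinical tables
NEEDS_REVIEW = "needs_review"  # scenario 2: user must verify the match
NO_MATCH = "no_match"          # scenarios 3/4: user decides create vs. discard

def route_message(candidates, auto_threshold=0.95, review_threshold=0.5):
    """Classify one inbound message given scored identity candidates.

    candidates: list of (person_id, score) tuples, score in [0, 1].
    The thresholds stand in for the configurable matching rules.
    """
    if not candidates:
        # no existing person resembles this message at all
        return NO_MATCH
    best_id, best_score = max(candidates, key=lambda c: c[1])
    if best_score >= auto_threshold:
        return AUTO_MATCH
    if best_score >= review_threshold:
        return NEEDS_REVIEW
    return NO_MATCH
```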
One thing I am wondering is whether the parser (Mirth or HAPI) will
be smart enough to evaluate every message in a single pass through
the source file and, according to rule-based decisions, distribute
the information to the appropriate places in the backend, or whether
all messages will first need to be imported into a table, each of
whose rows holds one message in raw form.
All messages *could* be imported into the table
clin.incoming_data_unmatched
from which the auto-matchable records would be migrated out. This
would leave behind those for which an algorithm can suggest a match
(pending user verification), plus one other class of messages whose
fate depends on a user either salvaging a match or abandoning them,
the abandoned ones then being moved over to
clin.incoming_data_unmatchable
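The staging-table flow just described might look roughly like the
following. A minimal sketch only: sqlite3 in memory stands in for the
PostgreSQL backend, the column layout is invented, and the actual
clinical-table write for matched records is elided.

```python
import sqlite3

# Illustrative staging tables; names follow the message, columns are assumed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE incoming_data_unmatched (
        pk INTEGER PRIMARY KEY,
        raw_message TEXT NOT NULL,
        fk_identity INTEGER        -- NULL until a patient is matched
    );
    CREATE TABLE incoming_data_unmatchable (
        pk INTEGER PRIMARY KEY,
        raw_message TEXT NOT NULL
    );
""")

# 1. Import every message in raw form, one row each.
messages = ["MSH|...|patient A", "MSH|...|patient B", "MSH|...|unknown"]
conn.executemany(
    "INSERT INTO incoming_data_unmatched (raw_message) VALUES (?)",
    [(m,) for m in messages])

# 2. Auto-matchable rows are migrated out (the clinical write is elided;
#    here they are simply removed from the staging table).
conn.execute(
    "DELETE FROM incoming_data_unmatched WHERE raw_message LIKE '%patient A'")

# 3. Abandoned messages move over to the unmatchable table.
conn.execute("""
    INSERT INTO incoming_data_unmatchable (raw_message)
    SELECT raw_message FROM incoming_data_unmatched
    WHERE raw_message LIKE '%unknown'
""")
conn.execute(
    "DELETE FROM incoming_data_unmatched WHERE raw_message LIKE '%unknown'")
```

After the pass, only the record awaiting user assistance (patient B)
remains in incoming_data_unmatched.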
So here is another question: even if it is decided that Mirth or HAPI
could evaluate the matching rules and, for the well-enough-matched
records, write the results into the clinical tables, we would still
end up with some of the message information going into
incoming_data_unmatched and incoming_data_unmatchable, and we would
need a way for some of *that* data to be re-processed after the
identity of the patient had been confirmed or supplied. Essentially,
we would need to use the output of a query on the user-matched
records as the input for a post hoc reprocessing of those messages.
If processing will need to be done from these tables anyway, is there
value in having a front-end channel intercept part of the data?
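The post hoc reprocessing step could be as simple as the loop below.
Again a sketch under assumptions: the row shape mirrors the staging
table imagined earlier, and process_message() is a hypothetical
stand-in for whatever routine writes a matched message into the
clinical tables.

```python
# Hypothetical reprocessing of staging rows a user has just matched.
# process_message is an assumed callback: (raw_message, identity) -> None.

def reprocess_user_matched(rows, process_message):
    """rows: (pk, raw_message, fk_identity) tuples from a query on the
    user-matched records; returns the pks that were imported and can
    now be removed from the staging table."""
    done = []
    for pk, raw, identity in rows:
        if identity is None:
            continue  # still unmatched, leave it in the staging table
        process_message(raw, identity)
        done.append(pk)
    return done
```

The returned pks would then drive the DELETE (or move) that clears the
successfully reprocessed rows out of incoming_data_unmatched.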