Blog



Upgrading MEDITECH and Consolidating our Master Patient Index (MPI)


Consolidating Master Patient Indexes

November 12, 2019                     Written By: Jim Smith, Director of Data Services

As the HCIS landscape shifts, we often find ourselves involved in major upgrade and conversion endeavors on behalf of our customers. Whether upgrading to the latest version of MEDITECH or moving to a completely new platform, our sites are confronted with a host of challenges. While this threatens to be a frustrating process, it also presents unique opportunities to improve inefficient workflows, normalize dictionaries, retire obsolete reporting methods or legacy applications, and ensure that the mistakes made in the current system are not inherited during the upgrade process. Preemptive decisions need to be made regarding resource allocation and timelines to ensure goals can be accomplished at the appropriate stage of the conversion process. When upgrading MEDITECH, we should take the consolidation of our Master Patient Index (MPI) into consideration.

Over time, an MPI database inevitably accrues duplicate records due to misidentification or workflow errors, and if we hesitate before starting our reconciliation effort, we can easily miss the window of opportunity to move forward with a clean MPI. We all understand the importance of eliminating duplicate records to ensure the data integrity of each patient’s medical history, but there are other considerations: improved operational efficiency and cost reduction are major factors. Missing this window will complicate the process of reconciling duplicates in the future. For example, when upgrading to a new version of MEDITECH we often need to maintain a historical link with our previous systems. After conversion, any duplicates merged in the new HCIS will also need to be merged in the prior HCIS to maintain the accuracy of this historical link, but if the records are merged prior to conversion, this extra step is unnecessary.

It is important not to minimize the effort involved in the resolution process, as this is the primary reason for missed timelines. No fully automated solution can eliminate the risk of creating additional errors, such as patients merged in error. Our focus needs to be on finding ways to be as efficient as possible during the resolution process, and on approaching the problem in a way that maximizes the results of our time spent on each task, without creating additional risk.

Our response to the challenges posed by the duplicate resolution effort is our MPI MergeIT™ application. MergeIT™ integrates with the existing MEDITECH tools used to reconcile duplicates, enhancing and accelerating the resolution process without the risk of an unintended or erroneous merge. MergeIT™ generates customizable worklists of potential duplicates by comparing demographic data at the database level. This lets us prioritize the most likely duplicates and defer the potential duplicates that need more investigation to a second phase of the process. When a duplicate is identified, it can be immediately handed off to the MEDITECH merge routine, eliminating any need for redundant manual entry and ensuring that all values are merged as intended. As the selection criteria are expanded in the second phase, any potential duplicates that are confirmed to be unique can be permanently filtered from the identification routine, ensuring we never needlessly investigate the same medical record number combination more than once.
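MergeIT™’s actual matching logic is proprietary; as a rough illustration only, comparing demographic fields and scoring candidate pairs might be sketched like this, with invented field names, weights, and thresholds:

```python
# Hypothetical sketch of demographic matching for candidate duplicates.
# Field names, weights, and the threshold are illustrative assumptions,
# not MergeIT's actual algorithm.

def duplicate_score(rec_a, rec_b):
    """Score two patient records; higher means more likely duplicates."""
    score = 0
    if rec_a["last_name"].upper() == rec_b["last_name"].upper():
        score += 2
    if rec_a["first_name"].upper() == rec_b["first_name"].upper():
        score += 2
    if rec_a["dob"] == rec_b["dob"]:
        score += 3
    if rec_a.get("ssn") and rec_a["ssn"] == rec_b.get("ssn"):
        score += 4
    return score

def candidate_pairs(records, threshold=5, excluded=frozenset()):
    """Worklist of likely duplicates, skipping pairs already confirmed unique."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            key = frozenset((a["mrn"], b["mrn"]))
            if key in excluded:
                continue  # never re-investigate a confirmed-unique MRN pair
            if duplicate_score(a, b) >= threshold:
                pairs.append((a["mrn"], b["mrn"]))
    return pairs
```

The `excluded` set mirrors the permanent filtering described above: once an MRN pair is confirmed unique, it never reappears on a worklist.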

The upgrade process does not have to be daunting, and presents many opportunities, but these opportunities can be easily missed if we don’t take initiative and engage the necessary resources early enough in the process. To ensure we can consolidate our MPI prior to conversion, and that we are not creating additional work for ourselves and inheriting the corruption present in our current system, we need to begin well before the conversion is underway.

REGISTER to attend our MergeIT™ webinar on November 19th at 2 pm EST and learn how MergeIT™ can help clean up your Master Patient Index.

To learn about The HCI Solution’s MPI MergeIT™, CLICK HERE.

 




Interface Engine Maintenance:

Caring for the Heart of Data Flow


Interface Engine Services

October 23, 2019    Written By: Pedro Jimenez, Director Interface Engine Services

An interface engine can act as the heart of dataflow in an enterprise, as it facilitates the transfer of data from one application to another. As with any system, there are certain things to be mindful of in order to keep dataflow moving in a reliable manner 24 hours a day, 7 days a week. After working with numerous interface engines for over 16 years, I have inevitably run across several “best practices” that have withstood the test of time. Among these are: interface and application endpoint documentation, interface contingency alerting, server health monitoring, and high-availability assurance.

DOCUMENTING INTERFACES

Good documentation provides a map of the interfaces in place. The best integration shops have it; unfortunately, this is the exception rather than the rule. Good documentation does not need to be complicated. While Visio diagrams are ideal, a spreadsheet can be a great start. Each interface can be assigned a row and given a unique id on the spreadsheet; include the interface name, a brief description, and a column referencing (typically by unique identifier) the interface(s) it feeds, separated by commas. You can always add other columns later to indicate the interface’s dataflow source and destination address. Keep it simple; you can go for greater sophistication later.
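The spreadsheet layout described above can be modeled in a few lines; the interface names, descriptions, and ids below are invented for illustration:

```python
# A minimal model of the interface-registry spreadsheet: one row per
# interface, with a unique id and a comma-separated "feeds" column
# referencing downstream interfaces. All entries are made-up examples.

interfaces = [
    {"id": "IF01", "name": "ADT Outbound", "description": "Patient demographics", "feeds": "IF02,IF03"},
    {"id": "IF02", "name": "Lab Orders", "description": "Orders to the LIS", "feeds": ""},
    {"id": "IF03", "name": "Billing Feed", "description": "Charges to billing", "feeds": ""},
]

def downstream(interface_id):
    """Return the ids of the interfaces a given interface feeds."""
    for row in interfaces:
        if row["id"] == interface_id:
            return [f for f in row["feeds"].split(",") if f]
    return []
```

Even this simple structure answers the key operational question: if IF01 goes down, which downstream interfaces are affected?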

INTERFACE ENGINE ALERTING

Unless you have a very small number of interfaces, a dedicated 24 x 7 help desk, or are willing to adopt a reactive stance in which you depend on your application users to notify you of delayed data updates, automated alerting is a must. In a nutshell, automated alerting lets key people in your organization (including vendors) become aware of interface problems before your users notice something is wrong with the timely interfacing of data. Automated alerting allows interface connectivity, message processing, or delivery problems to be reported via a dashboard, email, or texting medium. Just about every interface engine has these capabilities built into it. A dedicated SMTP email account can be a starting point for automated, real-time alerts that will reach key support team members quickly and proactively.
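As a sketch of the dedicated-SMTP-account approach, an alert email can be composed with the Python standard library; the addresses and host below are placeholders, not real accounts:

```python
# A minimal sketch of composing an interface alert email. The addresses
# and SMTP host are hypothetical placeholders; in practice these would be
# the dedicated alerting account mentioned above.

import smtplib
from email.message import EmailMessage

def build_alert(interface_name, problem):
    msg = EmailMessage()
    msg["Subject"] = f"INTERFACE ALERT: {interface_name} - {problem}"
    msg["From"] = "engine-alerts@example-hospital.org"   # hypothetical
    msg["To"] = "integration-team@example-hospital.org"  # hypothetical
    msg.set_content(
        f"Interface '{interface_name}' reported: {problem}.\nPlease investigate."
    )
    return msg

def send_alert(msg, host="smtp.example-hospital.org"):
    """Delivery is environment-specific; shown here for completeness."""
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)
```

Most engines provide this natively; a script like this is only useful for supplementing an engine whose built-in alerting is limited.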

SERVER HEALTH MONITORING

Alerting does not have to stop at the interface engine. The server on which the interface engine resides requires continuous monitoring. If available, the interface engine’s built-in alerting system may monitor important server resources such as disk storage space, CPU, RAM, network, or disk resource utilization, and alert if any thresholds are crossed. Other interface engine components, such as database servers, should also be monitored and alerted on. There are also third-party tools, such as The HCI Solution’s Sentry, that can supplement, if not fully provide, these vital server monitoring and alerting functions.
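The threshold-crossing checks described above reduce to simple comparisons; here is an illustrative sketch using only the standard library, with example thresholds that are not recommendations:

```python
# Illustrative threshold-based server health checks. The 10% free-disk
# floor and 85% CPU ceiling used below are example values only.

import shutil

def disk_alert(path="/", min_free_fraction=0.10):
    """Return an alert string if free disk space falls below the threshold."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    if free_fraction < min_free_fraction:
        return f"ALERT: only {free_fraction:.1%} free on {path}"
    return None

def check_threshold(name, value, limit):
    """Generic 'value crossed limit' check, usable for CPU, RAM, etc."""
    return f"ALERT: {name} at {value}% exceeds {limit}%" if value > limit else None
```

A scheduled task running checks like these, wired to the alerting channel from the previous section, covers the server even when the engine’s built-in monitoring does not.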

CONTINUOUS OPERATION BEST PRACTICES

While alerting provides insights into what is happening now, it should still be considered a reactive component of an interface management regime. A “trending approach” should be adopted as a proactive component of best interface engine management practices, as it can help you spot problems BEFORE they escalate into a dataflow stoppage or even data loss.

System resource utilization trends will not only indicate when there is trouble brewing (e.g., dwindling disk space or higher-than-usual network bandwidth utilization) but can also be of use when it comes to long-term planning. CPU, RAM, and disk usage trends should be tracked alongside interface engine utilization metrics on a daily, weekly, and monthly basis, with messages/transactions as a primary source from which to derive overall data usage bandwidth. A calculated average message size multiplied by the number of messages for each interface can provide a more accurate throughput metric (MBs/GBs per 24-hour period) than total message counts, as message sizes can vary greatly. Data mining interface engine databases or message logs is an alternate way of accomplishing this key task.
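The throughput calculation above is straightforward arithmetic; the ADT feed figures below are invented for illustration:

```python
# Throughput per interface as described above: average message size times
# message count, expressed in MB per 24-hour period. Example figures only.

def daily_throughput_mb(avg_message_bytes, messages_per_day):
    return avg_message_bytes * messages_per_day / (1024 * 1024)

# e.g. a hypothetical ADT feed averaging 4 KB per message at 50,000 messages/day
adt_mb = daily_throughput_mb(4 * 1024, 50_000)  # about 195 MB/day
```

Summing this figure across interfaces gives the overall data bandwidth trend that raw message counts alone would misstate.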

BECOMING A SUPER-STAR: STRESS TESTS AND HIGH AVAILABILITY

If you really want to be proactive, run periodic “stress” tests by submitting a batch of messages through a test interface, recording the time it takes for that batch to process, and trending the “time to delivery” over time. This is a great way to measure the engine’s message processing performance vis-à-vis the engine’s current workload. Other strategies, such as virtual machines and high-availability software solutions like Microsoft Cluster Server, can make it easy to keep the interface engine dataflows running 24 x 7 through operating system patches, hardware maintenance, or any other events that might otherwise bring the engine offline.
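Trending the recorded “time to delivery” samples can be as simple as comparing recent runs against older ones; the dates and timings below are invented:

```python
# A sketch of trending stress-test results. Each sample is (date, seconds
# to process a fixed test batch); all figures are illustrative.

def delivery_trend(samples):
    """Change in average batch time: recent half minus older half (seconds)."""
    times = [seconds for _, seconds in samples]
    half = len(times) // 2
    older, newer = times[:half], times[len(times) - half:]
    return sum(newer) / len(newer) - sum(older) / len(older)

samples = [
    ("2019-07-01", 42.0),
    ("2019-08-01", 44.0),
    ("2019-09-01", 47.0),
    ("2019-10-01", 49.0),
]
# A positive trend means the same batch is taking longer to process,
# worth investigating before it becomes a dataflow stoppage.
```

Because the batch is fixed, any drift in processing time reflects the engine’s workload or health rather than changes in traffic.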

CONCLUSION

Similar to your own biological heart, an interface engine is the heart of data flowing throughout your enterprise. Just as frequent exercise, good eating habits, and routine monitoring of your body’s vital signs can keep your cardiovascular system in top shape throughout your lifetime, the same applies to interface engines. Understanding the components of your integration environment, tracking their performance, and utilizing a regime of software tools and best practices will keep you running 24 x 7 and help you avoid the nasty surprises that occur years down the road when you least expect them.

To learn about The HCI Solution’s Interface Engine Services  CLICK HERE.

 





The Three W’s of

Archiving Legacy Systems


Archiving

September 19, 2019                         Written By: Ken Hoffman, Owner/President

Who doesn’t love sunsets? Data centers across the country are packed with old legacy systems that cost hundreds of millions of dollars to maintain. Many avoid sunsetting because they’re just unsure what to do with the data. We call it eating an elephant, one bite at a time. First, it helps to determine What, When, and Where: the three W’s of archiving legacy systems.

1) What do you want to retain? One of the easiest questions to ask, but the most difficult to answer. The most common answer from stakeholders is “all of it,” which isn’t practical. Whether discrete or image, clinical or business office, asking three questions can help: Is the data clinically relevant? Legally required? Relevant to the business?

2) When to start the extraction and conversion? This question has two parts: when to start the extracts (pre- or post-conversion to the new system), and what date range to extract. Evaluating historical relevance can be cause for pause; hospitals often stall on choosing a time frame of extraction because it is difficult to determine.

3) Where do you want to store data? We find most sites will store clinical data in an existing scanning/archiving system via a COLD feed. There might be data you don’t want in your active archiving system but rather in a separate vendor neutral archive (VNA). HCI can provide a VNA should you want separate storage.

Whether you want discrete or image retention, The HCI Solution will walk you through the three W’s of archiving legacy systems, and you can watch the beauty of a sunset with an experienced partner. We’ve helped many sites sunset their legacy systems, saving them millions.

To learn more about Data Archiving and Conversion, check out our Webinar Schedule and register for our Data Archiving & Conversion Webinar.

 





Engineering Concierge

Your Single Technical Resource


Engineering Concierge

August 30, 2019                                             Written By: Cora Hoffman, Sales Associate

We all equate a concierge with the go-to person at the hotel where we are staying who directs us to a restaurant, pharmacy, or department store when needed. They help us get where we want to go, with the resources needed to accomplish what we want to do in the time we have to do it.

Community hospitals have pressing issues. With interoperability, interface engines, complex reporting, software development, and the use of custom integration to ease workflow burden, community hospitals need more diverse technical resources than ever before to address vastly different functions.

Technical employees can be exceedingly difficult to recruit. Many managers get caught in a repeating cycle of recruiting, vetting, testing, and hiring. With different skill sets needed at different times, you might never have the correct individual employed for those pressing initiatives.

Could a team of virtual experts that possess this myriad of technical skills and experience be the answer? Would you call on them to help? If you needed 10 hours a month of help at a very competitive rate for one year, would you sign up?

Engineering Concierge is the idea of the future. Gone is the idea of one individual handling one set of challenges. Engineering Concierge is one point of contact for all technical services, even for the most diverse skill sets that might not typically be part of your team. As your needs change, let the solution remain the same.

 





API component of

Promoting Interoperability


July 16, 2019                                             Written By: Liz Morgan, HCI Sales Director

There isn’t a week that goes by that we don’t receive a question about the Provider to Patient Exchange, or API, component of Promoting Interoperability. With the measure carrying a weight of 40 points in the new 2019 scoring methodology, and with 50 points needed overall, it is understandably a consideration.

With MEDITECH Greenfield, I was hoping to see some application development so hospitals would have a choice. Unfortunately, the economics of developing applications for patients to choose are quite anemic. Patients should not have to pay for their own health information and almost certainly, are not willing to do so without some other value associated. Even large organizations known for cutting edge development and deep pockets aren’t touching it. Google Health anyone?

When apps are developed, the hospital will have to test the ability of those apps to work in its environment. There is also the scary thought of a hospital being responsible for any breach on a 3rd-party app that it promotes. You can check that information out on the HHS website.

MEDITECH does have quite a bit of information on their website. You might want to check it out and check back often.

To me, the key is the wording of the attestation requirements:

Provide Patients Electronic Access to Their Health Information

  • DENOMINATOR: The number of unique patients discharged from an eligible hospital or CAH inpatient or emergency department (POS 21 or 23) during the EHR reporting period.
  • NUMERATOR: The number of patients in the denominator (or patient-authorized representative) who are provided timely access to health information to view online, download, and transmit to a third party, and to access using an application of their choice that is configured to meet the technical specifications of the API in the eligible hospital’s or CAH’s CEHRT.
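The measure itself is a simple numerator-over-denominator percentage; the counts below are invented purely to illustrate the arithmetic:

```python
# The attestation measure as arithmetic: patients provided timely API
# access, divided by patients discharged during the EHR reporting period.
# The counts are hypothetical examples.

def measure_percentage(numerator, denominator):
    return 100.0 * numerator / denominator

# e.g. 1,800 of 2,000 discharged patients provided timely electronic access
pct = measure_percentage(1_800, 2_000)  # 90.0%
```

What makes the measure hard is not the math but ensuring every patient in the denominator actually has configured API access counted in the numerator.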

Making certain you have the API component of Promoting Interoperability configured and available sets you up for success with this measure. In addition, keep your portal up and rolling!

 


