Manage Reporting Workflow with DefinITy

DefinITy Report Tracking & Governance

February 3, 2020                        Written By: Jim Smith, Director of Data Services

In my previous post on MPI consolidation, we saw how a conversion/upgrade effort, though an intensive undertaking, presents many opportunities to improve our organization and workflow. One of these opportunities is improving our data governance, specifically the process of evaluating and planning our reporting needs. This is most apparent during a platform change, which is the use case I’ll focus on, but the same considerations apply whether we are upgrading our HCIS or simply working through a maintenance cycle.

Reporting is central to our day-to-day operations and long-term planning, and over time we accrue hundreds of essential reports, distributed throughout our HCIS and depended on by our multi-disciplined user base. When planning for an upgrade, many of these reports will need to be rewritten in light of the enhanced functionality and changes present in the new system. In 2018, our development partner Halifax Health completed their upgrade from MEDITECH Client Server to MEDITECH Expanse. Planning for this upgrade, Halifax needed a way to evaluate the full scope of their accumulated reports and determine the best way to eat the report governance elephant, one bite at a time.

We begin by answering a series of questions based on our particular upgrade effort: Exactly how many reports do we currently have? How can this number be reduced? For each report, we want to determine relevance and, if relevant, priority and upgrade requirements. What type of report is it, i.e., NPR Report Writer or Data Repository? Is the application the report references changing platforms (NPR to M-AT)? Does the latest MEDITECH version provide standard functionality that makes the report obsolete? Once these questions have been answered, we can begin to delegate the individual reports to the internal or external resources that will be responsible for rebuilding and testing them in the new MEDITECH platform.

DefinITy is our report tracking and governance solution, born out of Halifax Health’s planning and analysis effort. DefinITy facilitates the upgrade process by providing a means to easily track and manage the steps from report discovery and analysis through testing and finalizing the new report version. Existing report metadata and usage criteria are loaded into the DefinITy database, and each report is then assigned an owner, i.e., the user responsible for determining its relevance. Report owners can then manage their list of reports by canceling reports no longer needed, or by confirming the request for an upgrade and adding any additional details or requirements. Requested reports are then assigned to a resource for tracking the completion of the new version of the report.
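Conceptually, each report moves through a simple lifecycle. The Python sketch below is purely illustrative (the class, field, and status names are invented for this post, not DefinITy’s actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    # Illustrative lifecycle stages only; not DefinITy's actual schema
    LOADED = "loaded"            # metadata imported into the database
    ASSIGNED = "assigned"        # an owner must judge relevance
    CANCELED = "canceled"        # owner deemed the report obsolete
    REQUESTED = "requested"      # owner confirmed the upgrade request
    IN_PROGRESS = "in progress"  # a resource is rebuilding the report

@dataclass
class Report:
    name: str
    report_type: str             # e.g. "NPR Report Writer" or "Data Repository"
    owner: str = ""
    resource: str = ""
    details: str = ""
    status: Status = Status.LOADED

    def assign_owner(self, owner: str) -> None:
        self.owner = owner
        self.status = Status.ASSIGNED

    def cancel(self) -> None:
        self.status = Status.CANCELED

    def request_upgrade(self, details: str = "") -> None:
        self.details = details
        self.status = Status.REQUESTED

    def assign_resource(self, resource: str) -> None:
        self.resource = resource
        self.status = Status.IN_PROGRESS

# Walk one hypothetical report through the lifecycle
r = Report("Daily Census", "NPR Report Writer")
r.assign_owner("jdoe")
r.request_upgrade("Rebuild against Data Repository")
r.assign_resource("external-team")
print(r.status.value)  # in progress
```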

Sound planning and consideration ensure that we make the most of the opportunities presented to us during an upgrade effort. Using a tool to manage and track the report update process in phases allows us to focus on the upgrade as a whole while delegating the details to the correct resources. To assist sites with this process, The HCI Solution offers a complimentary report analysis, providing a high-level overview of a site’s current reporting footprint and usage, and highlighting those reports that will be impacted by platform upgrades. We also offer a full suite of Data Services, in the event that you are looking for external resources to assist with your reporting needs, regardless of your MEDITECH version.

Please feel free to check out our no-obligation Request a Quote Form.

REGISTER to attend our DefinITy webinar April 7th, 2 pm EST and learn how DefinITy can help manage your reporting workflow.

REGISTER to attend our DefinITy webinar May 19th, 2 pm EST.

To learn more about The HCI Solution’s DefinITy, CLICK HERE.



Why SyncSolve® Perfectly Complements MEDITECH's Corporate Management Software (CMS)


January 6, 2020                                     Written By: Dan Collins, VP of Operations


Utilizing your Electronic Health Record’s (EHR’s) TEST system is the best way to ensure that issues with dictionary edits, parameter changes, or new software are discovered before they introduce patient safety or other problems into your LIVE EHR environment. However, your EHR’s TEST system is only truly effective at catching problems if it behaves the same way as your LIVE EHR. That means putting effort into supporting a proper dictionary management process: whatever dictionaries are built and edited in the LIVE system need the same changes made in the TEST system, and vice versa. This is easier said than done. Manually re-keying dictionary edits in two different systems is an arduous process filled with opportunities to introduce human error, and these errors can eventually lead to more time spent troubleshooting issues that arise from the TEST and LIVE systems not behaving the same. Let’s learn why SyncSolve® complements MEDITECH’s Corporate Management Software (CMS) so perfectly.
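To make the problem concrete, a field-by-field dictionary compare can be sketched in a few lines. This is a hypothetical illustration (the sample entry and field names are invented; it is not how CMS or SyncSolve® is implemented):

```python
def compare_dictionaries(live: dict, test: dict) -> list[str]:
    """Report field-by-field discrepancies between two dictionary entries."""
    discrepancies = []
    for key in sorted(set(live) | set(test)):
        if key not in test:
            discrepancies.append(f"{key}: missing in TEST")
        elif key not in live:
            discrepancies.append(f"{key}: missing in LIVE")
        elif live[key] != test[key]:
            discrepancies.append(f"{key}: LIVE={live[key]!r} TEST={test[key]!r}")
    return discrepancies

# A made-up order-entry dictionary entry as it exists in each ring
live_entry = {"mnemonic": "CBC", "active": "Y", "category": "LAB"}
test_entry = {"mnemonic": "CBC", "active": "N", "category": "LAB"}
for line in compare_dictionaries(live_entry, test_entry):
    print(line)  # active: LIVE='Y' TEST='N'
```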


MEDITECH customers now receive the MEDITECH CMS solution with new Expanse implementations. The CMS system was first developed for larger corporate healthcare organizations and Integrated Delivery Networks (IDNs) as a way for large organizations to standardize their EHR content across all facilities. Dictionaries could be built in one “standards” HCIS, and CMS background jobs would propagate those dictionary edits to the target corporate HCIS’s. One great feature is that the system allows certain fields and certain dictionaries to be “localized.” For instance, it would not make sense to standardize and centrally control the location dictionary; a lot of dictionaries and fields are very specific to designated facilities.

CMS not only keeps disparate MEDITECH facility EHRs synchronized, it is also used to keep the separate MEDITECH TEST and LIVE systems synchronized. This is the primary reason why MEDITECH started offering it to smaller organizations with Expanse. CMS is a big step in the right direction for dictionary management. However, it is not the be-all and end-all. There are still some challenges with the CMS system. Here are some limitations:

  • A lot of dictionaries are not controlled/propagated by CMS.
  • The full automation of CMS is a positive, but there are times when more granular control and analysis is desired.
  • When customers are going through a MEDITECH update, propagation from one MEDITECH release to another is not possible.
  • Sometimes things do not sync as intended due to CMS or other MEDITECH software problems. These issues can be difficult to detect without an additional dictionary compare tool.


The HCI Solution has several customers that utilize both CMS and SyncSolve® for their dictionary management needs. Why? SyncSolve® is 100% CMS compatible – dictionaries that are synced by SyncSolve® to the “standards” HCIS are propagated to all CMS target HCIS’s, even dictionaries from other MEDITECH releases. SyncSolve® can be used to compare any dictionary from one MEDITECH system to another, even across Universes! CMS does not synchronize every dictionary. SyncSolve® synchronizes nearly every dictionary. Here are just a few of the many examples of dictionaries that can be synced by SyncSolve® that cannot be synced by CMS:

  • Person Dictionary
  • Canned text
  • Menu/Procedure Access
  • CWS Resource
  • Oncology Treatment Plan

When MEDITECH customers are going through an update, SyncSolve® can synchronize between software releases, where CMS cannot. Some of our customers even use SyncSolve® to compare dictionaries between HCIS’s to make sure that CMS is functioning properly. SyncSolve® provides much more granular control than CMS does. It allows you to sync specific fields within specific dictionaries, something that CMS does not do. It allows users to create very specific use-cases for hospital initiatives.


Here is a summary of SyncSolve®’s capabilities:

  • Available in MAGIC, CS, M-AT 6.x, Expanse
  • Compare dictionaries across databases, HCIS’s, and even UNV’s
  • View discrepancies field by field
  • View dependent dictionary deficiencies
  • Generate detailed work lists and reports
  • Launch directly to dictionary Enter/Edit screens from work lists
  • Schedule and automate dictionary maintenance
  • Monitor dictionaries for changes
  • Auto-synchronize dictionary and dependent dictionary differences
  • Build custom use cases
  • Manage access controls for decentralization
  • Copy dictionaries and parameters
  • Distribute reports by email


CMS is a great tool for automating the propagation of dictionary edits. It goes a long way in helping to keep your TEST and LIVE systems synchronized. However, it does not cover everything. SyncSolve® will fill in those gaps. SyncSolve® is the perfect tool to help complete specific use-case projects along with CMS, like building new assessments in TEST prior to syncing them to “standards”, or building users in LIVE and then syncing them to TEST. It is also useful for migrating previous release TEST dictionaries to your new TEST release ring, for those dictionaries that MEDITECH could not copy. Hopefully now you see why SyncSolve® perfectly complements MEDITECH’s Corporate Management Software (CMS).


If you don’t have MEDITECH’s CMS system, then SyncSolve® is the only real option that truly works. SyncSolve® is available for MAGIC, C/S, 6.x, and Expanse.

REGISTER to attend our SyncSolve® webinar May 12th, 2 pm EST and learn how SyncSolve® can help clean up your Dictionaries.

To learn about The HCI Solution’s SyncSolve®, CLICK HERE.




Upgrading MEDITECH and Consolidating our Master Patient Index (MPI)

Consolidating Master Patient Indexes

November 12, 2019                     Written By: Jim Smith, Director of Data Services

As the HCIS landscape shifts, we often find ourselves involved in major upgrade and conversion endeavors on behalf of our customers. Whether upgrading to the latest version of MEDITECH, or moving to a completely new platform, our sites are confronted with a host of challenges. While this threatens to be a frustrating process, it also presents unique opportunities to improve inefficient workflows, normalize dictionaries, retire obsolete reporting methods or legacy applications, and ensure that the mistakes made in the current system are not inherited during the upgrade process. Preemptive decisions need to be made regarding resource allocation and timelines to ensure goals can be accomplished at the appropriate stage of the conversion process. When upgrading MEDITECH, consolidating our Master Patient Index (MPI) should be taken into consideration.

Over time, an MPI database inevitably accrues duplicate records, due to misidentification or workflow errors, and if we hesitate before starting our reconciliation effort, we can easily miss the window of opportunity to move forward with a clean MPI. We all understand the importance of eliminating duplicate records to ensure the data integrity of each patient’s medical history, but there are other considerations. Improved operational efficiency and cost reduction are major factors. Missing this window will complicate the process of reconciling duplicates in the future. For example, when upgrading to a new version of MEDITECH we often need to maintain a historical link with our previous systems. After conversion, any duplicates merged in the new HCIS will also need to be merged in the prior HCIS to maintain the accuracy of this historical link, but if the records are merged prior to conversion, this extra step is unnecessary.

It is important not to minimize the effort involved with the resolution process, as this is the primary reason for missed timelines. There is no fully automated solution that will ensure there is no risk of creating additional errors, such as patients merged in error. Our focus needs to be on finding ways to be as efficient as possible during the resolution process, and on approaching the problem in a way that maximizes the results of our time spent on each task, without creating additional risk.

Our response to the challenges posed by the duplicate resolution effort is our MPI MergeIT™ application. MergeIT™ integrates with the existing MEDITECH tools used to reconcile duplicates, enhancing and accelerating the resolution process without the risk of an unintended or erroneous merge. MergeIT™ generates customizable worklists of potential duplicates by comparing demographic data at the database level. This lets us prioritize the most likely duplicates and defer the potential duplicates that will need more investigation to a second phase of the process. When a duplicate is identified, it can be immediately handed off to the MEDITECH merge routine, eliminating any need for redundant manual entry and ensuring that all values are merged as intended. As the selection criteria are expanded in the second phase, any potential duplicates that are confirmed to be unique can be permanently filtered from the identification routine, ensuring we never needlessly investigate the same medical record number combination more than once.
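As an illustration of the general idea, a demographic-scoring worklist might look like the following sketch. The fields, weights, and threshold here are hypothetical examples, not MergeIT™’s actual matching criteria:

```python
from itertools import combinations

def duplicate_score(a: dict, b: dict) -> int:
    """Score two patient records on matching demographics.
    Field names and weights are illustrative inventions only."""
    weights = {"ssn": 5, "last_name": 2, "first_name": 2, "dob": 3, "sex": 1}
    return sum(w for f, w in weights.items()
               if a.get(f) and a.get(f) == b.get(f))

# Pairs already confirmed unique are permanently filtered from the routine
confirmed_unique = {("MR1001", "MR2002")}

def worklist(records: dict, threshold: int = 8):
    """Yield candidate duplicate pairs scoring at or above the threshold."""
    for (mrn_a, rec_a), (mrn_b, rec_b) in combinations(records.items(), 2):
        pair = tuple(sorted((mrn_a, mrn_b)))
        if pair in confirmed_unique:
            continue  # never investigate the same MRN combination twice
        score = duplicate_score(rec_a, rec_b)
        if score >= threshold:
            yield pair, score

records = {
    "MR1001": {"last_name": "SMITH", "first_name": "JOHN",
               "dob": "1970-01-01", "sex": "M"},
    "MR3003": {"last_name": "SMITH", "first_name": "JOHN",
               "dob": "1970-01-01", "sex": "M"},
}
print(list(worklist(records)))  # [(('MR1001', 'MR3003'), 8)]
```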

The upgrade process does not have to be daunting, and presents many opportunities, but these opportunities can be easily missed if we don’t take initiative and engage the necessary resources early enough in the process. To ensure we can consolidate our MPI prior to conversion, and that we are not creating additional work for ourselves and inheriting the corruption present in our current system, we need to begin well before the conversion is underway.

REGISTER to attend our MergeIT™ webinar April 14th, 2 pm EST and learn how MergeIT™ can help clean up your Master Patient Index.

To learn about The HCI Solution’s MPI MergeIT™, CLICK HERE.




Interface Engine Maintenance:

Caring for the Heart of Data Flow

Interface Engine Services

October 23, 2019    Written By: Pedro Jimenez, Director Interface Engine Services

An interface engine can act as the heart of dataflow in an enterprise, as it facilitates the transfer of data from one application to another. As with any system, there are certain things to be mindful of in order to keep dataflow moving in a reliable manner 24 hours a day, 7 days a week. After working with numerous interface engines for over 16 years, I have inevitably run across several “best practices” that have withstood the test of time. Among these are: interface and application endpoint documentation, interface contingency alerting, server health monitoring, and high-availability assurance.


Good documentation provides a map of the interfaces in place. The best integration shops have it; unfortunately, this is the exception rather than the rule. Good documentation does not need to be complicated. While Visio diagrams are ideal, a spreadsheet can be a great start. Assign each interface a row and a unique id on the spreadsheet; include the interface name, a brief description, and a column referencing (typically by unique identifier) the interface(s) it feeds, separated by commas. You can always add other columns later to indicate the interface’s dataflow source and destination address. Keep it simple; you can go for greater sophistication later.
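As a minimal sketch of the spreadsheet layout described above (the interface ids and names are made up for illustration), the comma-separated feeds column lets you resolve what sits downstream of any interface:

```python
import csv
import io

# The same columns suggested above: id, name, description, feeds
doc = """id,name,description,feeds
IF01,ADT-OUT,ADT feed out of the HCIS,"IF02,IF03"
IF02,LAB-IN,ADT into the lab system,
IF03,RAD-IN,ADT into radiology,
"""

interfaces = {row["id"]: row for row in csv.DictReader(io.StringIO(doc))}

def downstream(interface_id: str) -> list[str]:
    """Resolve the names of the interfaces a given interface feeds."""
    feeds = interfaces[interface_id]["feeds"]
    return [interfaces[i]["name"] for i in feeds.split(",") if i]

print(downstream("IF01"))  # ['LAB-IN', 'RAD-IN']
```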


Unless you have a very small number of interfaces or a dedicated 24x7 help desk, or you adopt a reactive stance in which you depend on your application users to notify you of delayed data updates, automated alerting is a must. In a nutshell, automated alerting lets key people in your organization (including vendors) become aware of interface problems before your users notice something is wrong with the timely interfacing of data. Automated alerting allows interface connectivity, message processing, or delivery problems to be reported via a dashboard, email, or text message. Just about every interface engine has these capabilities built in. A dedicated SMTP email account can be a starting point for automated, real-time alerts that reach key support team members quickly and proactively.
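As a hedged illustration, an alert email can be assembled with Python’s standard email library; the addresses and wording below are placeholders, and actual delivery would go through your own SMTP server:

```python
from email.message import EmailMessage

def build_alert(interface: str, problem: str) -> EmailMessage:
    """Build an interface-alert email. Addresses are placeholders."""
    msg = EmailMessage()
    msg["From"] = "engine-alerts@example.org"
    msg["To"] = "integration-team@example.org"
    msg["Subject"] = f"[ENGINE ALERT] {interface}: {problem}"
    msg.set_content(
        f"Interface {interface} reported: {problem}. Please investigate."
    )
    return msg

alert = build_alert("LAB-IN", "no messages delivered in 15 minutes")
# To actually send, hand the message to your SMTP server, e.g.:
#   import smtplib
#   with smtplib.SMTP("mail.example.org") as s:
#       s.send_message(alert)
print(alert["Subject"])
```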


Alerting does not have to stop at the interface engine. The server on which the interface engine resides requires continuous monitoring. If available, the interface engine’s built-in alerting system may monitor important server resources such as disk storage space, CPU, RAM, and network or disk utilization, and alert if any thresholds are crossed. Other interface engine components, such as database servers, should also be monitored and alerted on. There are also third-party tools, such as The HCI Solution’s Sentry, that can supplement, if not fully provide, these vital server monitoring and alerting functions.
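A threshold check of this kind is simple to sketch; the percentages below are arbitrary examples, not recommended values:

```python
import shutil

def crossed(used: float, total: float, threshold_pct: float = 90.0) -> bool:
    """Generic threshold check, usable for disk, CPU, RAM, or bandwidth."""
    return used / total * 100 >= threshold_pct

def disk_alert(path: str = "/", threshold_pct: float = 90.0) -> bool:
    """Return True when disk usage on `path` crosses the threshold."""
    usage = shutil.disk_usage(path)
    return crossed(usage.used, usage.total, threshold_pct)

print(crossed(92, 100))  # True -> fire an alert
print(crossed(45, 100))  # False
```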


While alerting provides insights into what is happening now, it should still be considered a reactive component of an interface management regime. A “trending approach” should be adopted as a proactive component of best interface engine management practices, as it can help you spot problems BEFORE they exacerbate into a dataflow stoppage or even data loss.

System resource utilization trends will not only indicate when there is trouble brewing (e.g., dwindling disk space or higher than usual network bandwidth utilization) but can also be of use for long-term planning. CPU, RAM, and disk usage trends should be tracked alongside interface engine utilization metrics on a daily, weekly, and monthly basis, with message/transaction counts as a primary source from which to derive overall data bandwidth usage. A calculated average message size multiplied by the number of messages for each interface provides a more accurate throughput metric (MB/GB per 24-hour period) than total message counts alone, as message sizes can vary greatly. Data mining the interface engine’s databases or message logs is an alternate way of accomplishing this key task.
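The throughput arithmetic described above is straightforward; the message count and average size below are invented for illustration:

```python
def throughput_mb_per_day(message_count: int, avg_message_bytes: float) -> float:
    """Estimate daily interface throughput in MB: count x average size."""
    return message_count * avg_message_bytes / 1_000_000

# e.g. a hypothetical ADT feed: 50,000 messages/day averaging 4 KB each
print(round(throughput_mb_per_day(50_000, 4_000), 1))  # 200.0 (MB per day)
```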


If you really want to be proactive, run periodic “stress” tests by submitting a batch of messages through a test interface, recording the time it takes for that batch to process, and trending the “time to delivery” over time. This is a great way to measure the engine’s message processing performance vis-à-vis its current workload. Other strategies, such as virtual machines and high-availability software solutions like Microsoft Cluster Server, can make it easy to keep the interface engine dataflows running 24x7 during operating system patches, hardware maintenance, or any other events that might otherwise bring the engine offline.
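A minimal sketch of such a stress test, with a dummy function standing in for the real submission to a test interface:

```python
import time

def time_batch(process, messages) -> float:
    """Time how long a batch of test messages takes to run through `process`.
    `process` is a stand-in for submitting to a real test interface."""
    start = time.perf_counter()
    for msg in messages:
        process(msg)
    return time.perf_counter() - start

# Trend "time to delivery" for a fixed-size batch over successive runs
history = []
batch = [f"MSH|^~\\&|TEST{i}" for i in range(1000)]
elapsed = time_batch(lambda m: m.upper(), batch)
history.append(elapsed)
print(f"batch of {len(batch)} processed in {elapsed:.4f}s")
```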


Similar to your own biological heart, an interface engine is the heart of data flowing throughout your enterprise. Just as frequent exercise, good eating habits, and routine monitoring of your body’s vital signs can keep your cardiovascular system in top shape over a lifetime, the same care applies to interface engines. Understanding their components, tracking their performance, and utilizing a regime of software tools and best practices will keep you running 24x7 and help you avoid the nasty surprises that occur years down the road, when you least expect them.

To learn about The HCI Solution’s Interface Engine Services, CLICK HERE.



The Three W’s of

Archiving Legacy Systems


September 19, 2019                         Written By: Ken Hoffman, Owner/President

Who doesn’t love sunsets? Data Centers across the country are packed with old, legacy systems that cost hundreds of millions of dollars to maintain. Many avoid sunsetting because they’re just unsure what to do with the data. We call it eating an elephant, one bite at a time. First it helps to determine What, When, and Where; the three W’s of archiving legacy systems.

1) What do you want to retain? One of the easiest questions to ask, but the most difficult to answer. The most common answer from stakeholders is “all of it,” which isn’t practical. Whether the data is discrete or image, clinical or business office, asking these three questions can help. Is the data: Clinically relevant? Legally required? Business relevant?

2) When to start the extraction and conversion? This question has two parts: when to start the extracts (pre- or post-conversion to the new system), and what date range to extract. Evaluating historical relevance can be cause for pause; hospitals often end up not choosing an extraction time frame because it is difficult to determine.

3) Where do you want to store data? We find most sites will store clinical data in an existing scanning/archiving system via a COLD feed. There might be data you don’t want in your active archiving system but rather in a separate vendor neutral archive (VNA). HCI can provide a VNA should you want separate storage.

Whether you want discrete or image retention, The HCI Solution will walk you through the three W’s of archiving legacy systems, and you can watch the beauty of a sunset with an experienced partner. We’ve helped many sites sunset their legacy systems, saving them millions.

CONTACT US for more information.