An interface engine can act as the heart of dataflow in an enterprise, as it facilitates the transfer of data from one application to another. As with any system, there are certain things to be mindful of in order to keep data flowing reliably 24 hours a day, 7 days a week. After working with numerous interface engines for over 16 years, I have identified several “best practices” that have withstood the test of time. Among these are: interface and application endpoint documentation, interface contingency alerting, server health monitoring, and high-availability assurance.


Good documentation provides a map of the interfaces in place. The best integration shops have it; unfortunately, that is the exception rather than the rule. Good documentation does not need to be complicated. While Visio diagrams are ideal, a spreadsheet can be a great start. Assign each interface a row and a unique ID; include the interface name, a brief description, and a column listing the unique IDs of the interface(s) it feeds, separated by commas. You can always add other columns later to indicate the interface’s dataflow source and destination address. Keep it simple at first; you can go for greater sophistication later.
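To make the idea concrete, here is a minimal sketch of such an inventory as a CSV file. The interface IDs, names, and descriptions are hypothetical placeholders, not real interfaces:

```python
import csv
import io

# Hypothetical starter inventory: one row per interface, with a unique ID,
# a name, a brief description, and the IDs of the interface(s) it feeds.
rows = [
    {"id": "IF-001", "name": "ADT Outbound",
     "description": "Patient admit/discharge/transfer feed",
     "feeds": "IF-003,IF-004"},
    {"id": "IF-002", "name": "Lab Results",
     "description": "Lab results to downstream systems",
     "feeds": "IF-003"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "description", "feeds"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Columns for source and destination addresses can be appended later without disturbing the existing rows, which is what makes the spreadsheet approach a low-friction starting point.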


Unless you have a very small number of interfaces, a dedicated 24 x 7 help desk, or are content with a reactive stance in which you depend on your application users to notify you of delayed data updates, automated alerting is a must. In a nutshell, automated alerting lets key people in your organization (including vendors) become aware of interface problems before your users notice something is wrong with the timely interfacing of data. Automated alerting allows interface connectivity, message processing, or delivery problems to be reported via a dashboard, email, or text message. Just about every interface engine has these capabilities built in. A dedicated SMTP email account can be a starting point for automated, real-time alerts that reach key support team members quickly and proactively.
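As a sketch of the dedicated-SMTP-account approach, the snippet below composes an alert email with Python's standard library. The sender address, recipient list, and alert wording are all assumptions for illustration; the actual send step depends on your mail relay:

```python
import smtplib
from email.message import EmailMessage

def build_alert(interface_id: str, problem: str, recipients: list) -> EmailMessage:
    """Compose an interface-problem alert message (addresses are hypothetical)."""
    msg = EmailMessage()
    msg["From"] = "engine-alerts@example.org"  # dedicated alerting account (assumed)
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = f"[INTERFACE ALERT] {interface_id}: {problem}"
    msg.set_content(
        f"Interface {interface_id} reported a problem: {problem}.\n"
        "Please investigate before users notice delayed data."
    )
    return msg

alert = build_alert("IF-002", "no messages received in 30 minutes",
                    ["oncall@example.org"])

# Sending is environment-specific; with a local relay it would look like:
# with smtplib.SMTP("localhost") as s:
#     s.send_message(alert)
```

Most engines can call a script like this (or their own built-in equivalent) when a connectivity or delivery threshold is crossed.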


Alerting does not have to stop at the interface engine. The server on which the interface engine resides requires continuous monitoring. If available, the interface engine’s built-in alerting system may monitor important server resources such as disk storage space, CPU, RAM, and network or disk utilization, and alert if any thresholds are crossed. Other interface engine components, such as database servers, should also be monitored with alerts configured. There are also third-party tools, such as The HCI Solution’s Sentry, that can supplement, or fully provide, these vital server monitoring and alerting functions.
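The core of threshold-based monitoring is a simple comparison of observed utilization against an alert limit. A minimal sketch, using Python's standard library to read disk usage; the 90% threshold is an assumed policy, not a recommendation from any particular engine:

```python
import shutil

def over_threshold(used: int, total: int, limit_pct: float) -> bool:
    """Return True when resource utilization crosses the alert threshold."""
    return (used / total) * 100 >= limit_pct

# Real values come from the server being monitored; "/" and the 90%
# threshold here are assumptions for illustration.
usage = shutil.disk_usage("/")
if over_threshold(usage.used, usage.total, 90.0):
    print("ALERT: disk utilization above 90%")
```

The same comparison generalizes to CPU, RAM, or network metrics; what matters is that the check runs continuously and feeds the same alerting channels described above.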


While alerting provides insights into what is happening now, it should still be considered a reactive component of an interface management regime. A “trending approach” should be adopted as a proactive component of best interface engine management practices, as it can help you spot problems BEFORE they escalate into a dataflow stoppage or even data loss.

System resource utilization trends will not only indicate when there is trouble brewing (i.e. dwindling disk space or higher-than-usual network bandwidth utilization) but can also be of use when it comes to long-term planning. CPU, RAM, and disk usage trends should be tracked alongside interface engine utilization metrics on a daily, weekly, and monthly basis, with message/transaction counts as a primary source from which to derive overall data bandwidth usage. A calculated average message size multiplied by the number of messages for each interface can provide a more accurate throughput metric (MB/GB per 24-hour period) than total message counts alone, as message sizes can vary greatly. Mining the interface engine’s databases or message logs is an alternate way of accomplishing this key task.
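The throughput calculation described above is straightforward arithmetic. The figures below are hypothetical; real values would come from the engine's databases or message logs:

```python
# Hypothetical daily figures for one interface (assumed for illustration):
avg_message_bytes = 4_096     # ~4 KB average message size
messages_per_day = 250_000    # daily message count for this interface

# Throughput in MB per 24-hour period: average size x count.
throughput_mb = (avg_message_bytes * messages_per_day) / (1024 * 1024)
print(f"Estimated daily throughput: {throughput_mb:.1f} MB")
```

Computed per interface and trended over time, this figure exposes bandwidth growth that raw message counts would mask whenever average message size shifts.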


If you really want to be proactive, run periodic “stress” tests by submitting a batch of messages through a test interface, recording the time it takes for that batch to process, and trending the “time to delivery” over time. This is a great way to measure the engine’s message processing performance vis-à-vis the engine’s current workload. Other strategies, such as virtual machines and high-availability software solutions like Microsoft Cluster Server, can make it easy to keep the interface engine’s dataflows running 24 x 7 during operating system patches, hardware maintenance, or any other event that might otherwise bring the engine offline.
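The stress-test idea can be sketched as a small timing harness. The batch size, test payload format, and stand-in sender below are all assumptions; in practice the sender would submit each message to a dedicated test interface on the engine:

```python
import time

def run_stress_test(send_message, batch_size: int = 1000) -> float:
    """Submit a batch of test messages and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(batch_size):
        send_message(f"TEST|MSG|{i}")  # hypothetical test payload
    return time.perf_counter() - start

# Stand-in sender for illustration; a real run would target a test interface.
delivered = []
elapsed = run_stress_test(delivered.append, batch_size=100)
print(f"Processed {len(delivered)} messages in {elapsed:.4f}s")
```

Recording the elapsed time for the same batch at regular intervals, and trending it, is what turns this from a one-off benchmark into an early-warning signal about the engine's workload.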


Similar to your own biological heart, an interface engine is the heart of data flowing throughout your enterprise. Just as frequent exercise, good eating habits, and routine monitoring of your body’s vital signs can keep your cardiovascular system in top shape throughout your lifetime, the same care applies to interface engines. Understanding what the engine’s vital signs are, tracking them over time, and applying a regimen of software tools and best practices will keep you running 24 x 7 and help you avoid the nasty surprises that occur years down the road, when you least expect them.

To learn about The HCI Solution’s Interface Engine Services, CLICK HERE.