At the beginning of a new year, many people create lists of resolutions or goals for the future. Often, these resolutions are diligently kept for a few days or weeks only to be gradually dismissed or completely forgotten by the end of January.
Just like you, companies make resolutions, too, either as part of a formal planning process undertaken late in the fiscal year in preparation for the upcoming annual measurement period or before commencing a project like a system implementation. Organizations sometimes devise intricate ways to monitor and measure results against these goals. Yet, even with these seemingly adequate preparations in place, we continue to hear and read about project failures of massive proportions, with nearly 50 percent of enterprise system implementation projects deemed failures. After a poor go-live, there is plenty of finger-pointing, usually at the external implementation and vendor team. But, from my perspective, a significant amount of blame should also be shared by the user organization.
So as an accounting professional and consultant, I would like to offer some resolutions – tips – for companies to keep when implementing a new computer system or undertaking an upgrade effort. These tips are based on an aggregation of observations during my work in troubleshooting and resolving problems after the fact and are not unique to any one client experience but are common to many engagements.
Provide for Sufficient, Dedicated User SMEs on the Project Team
Consultants come in all flavors, good and bad. But, even the best ones are not experts in your business processes. This is where your subject matter experts (SMEs) come in. However, they will not be able to provide their best input if they can’t dedicate the necessary, undivided time and attention to the project. Too often they are too distracted by the demands of their daily jobs to focus on anything related to the new system. To alleviate this problem, arrange for “backfill” by temporarily reassigning employees or hiring temporary accounting staff to cover for the SMEs so that they can collaborate with the consultants to ensure that your business needs will be met.
Plan and Execute Adequate Testing
I wish I could count the number of times I’ve looked at testing plans and observed deficiencies. In today’s practice, it seems that the emphasis is on test scripts. Users are asked to provide input to the creation of test scripts, which are then executed, often by a consultant, and the end users are asked to sign off on the results. But an informed end user (SME, manager or director) needs to look at the testing process to ensure that the entire business cycle is adequately tested. There may only be a handful of unique transactions that fall outside the norm of processing, but those need to be contemplated and tested, too. And don’t forget about testing month-end, quarter-end and year-end processing activities and reporting requirements.
For example, in an ERP implementation, a new business process was created which involved introducing a new module and interfacing it to both the external payroll processor and to a clearing account in the general ledger. The payroll entry from the outsourced payroll created the offsets to the clearing account. As part of the new business process, a number of new “codes” were created within the module and mapped to GL accounts, but the external payroll entry interface was not modified. Scripts were created and executed so that the transactions from the new module to the GL were tested. Since there was no change to the payroll-to-GL entry, that process was not tested. Everything looked fine until after go-live, when the clearing account needed to be reconciled: the results looked like alphabet soup, and nothing matched. Several months had passed before help was brought in to identify the root causes, clean up the accounting, and recommend the modifications needed to correct the situation.
Notice that I said that the transactions (scripts) had been tested, but the failure occurred due to the mismatching of the new codes. It sounds simple, but add in the volume of transactions processed and the re-mappings between multiple entities caused by an internal reorganization that ran concurrently with the implementation, and you can imagine how quickly the complexity of the problem compounded. Much of this turmoil could have been avoided had the month-end reconciliation process been sufficiently reviewed during the testing phase. We used to call this “cradle to grave” testing, and even though I haven’t heard that terminology used in a long time, the concept remains valid.
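For readers who want to see the mechanics, here is a minimal sketch of the kind of netting check that a rehearsed month-end reconciliation would have run against the clearing account. The data layout, field names and account numbers are purely illustrative assumptions, not taken from the actual system described above:

```python
# Hypothetical sketch: reconcile a clearing account by netting postings
# from the new module against the offsets from the payroll interface.
# All field names and account numbers are illustrative only.
from collections import defaultdict

def reconcile_clearing(module_entries, payroll_entries):
    """Net postings per GL account; return any account that fails to zero out."""
    totals = defaultdict(float)
    for entry in module_entries:      # debits posted by the new module
        totals[entry["gl_account"]] += entry["amount"]
    for entry in payroll_entries:     # offsets from the payroll interface
        totals[entry["gl_account"]] -= entry["amount"]
    # Any residual balance signals a code-to-account mapping mismatch.
    return {acct: round(bal, 2) for acct, bal in totals.items()
            if abs(bal) > 0.005}

# Example: the new module posts to a re-mapped account (2105), but the
# unmodified payroll interface still offsets the old account (2100).
module = [{"gl_account": "2105", "amount": 1000.00}]
payroll = [{"gl_account": "2100", "amount": 1000.00}]
print(reconcile_clearing(module, payroll))
# Both accounts carry a residual balance, flagging the unmatched codes.
```

Run during the testing phase against a full month of simulated activity, a check like this surfaces the unmodified interface immediately, instead of months after go-live.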
Be Realistic When Reporting Project Status
We all know that no one wants to be the bearer of bad news. But if there are numerous defects being detected, that fact needs to be presented to project management as early as possible in order to provide for additional resources and/or other remedial efforts or contingencies. There is nothing worse than hearing that project status is green all the way until the very end of the project and, then, suddenly a “show-stopper” appears seemingly out of nowhere. Perhaps if the yellow caution flag had been raised earlier, the situation could have been more effectively dealt with over time. It is rare that an issue that significant is known only at the last minute. Management should insist on accurate and realistic project reports.
Be Ready to Push Back on the Go-Live Date If the System Isn’t Ready for Production
It’s your choice, but you will either pay now or pay later. I’ve actually seen situations where the decision was made to go live with known critical issues only to deal with those problems in Phase 2. But, there can be embarrassing consequences when system glitches adversely affect customers, financial statements are delayed, lawsuits are filed or the press publishes stories about the botched implementation. Before things get to that point, consider whether delaying the go-live by a month or even a quarter might provide the time needed to get things right. It could mean the difference between a successful project and an impending disaster.
In summary, executing a successful systems implementation project involves ensuring the participation of any relevant subject matter experts (SMEs), performing adequate testing, communicating realistic project reports and possessing the willingness to extend the go-live date. With these tips, I hope 2014 will be a successful one for you and your organization.
For additional information, visit IMTA's Systems Implementation/Technology Integration webpage.
About the author:
Doris is an independent consultant who has 30 years of experience in the accounting field. She spent 17 years in public accounting including time at Coopers & Lybrand. She also has industry experience in the securities industry as assistant controller at Charles Schwab. In addition, as an IT/Finance Specialist, Doris earned experience in insurance. In the last 10 years Doris’ primary focus has been on ERP implementations (SAP/Oracle/PeopleSoft/JD Edwards), financial reporting/business intelligence systems and data warehousing. Doris served on the AICPA's IMTA Executive Committee for three years. She currently serves on the AICPA's CITP Credential Committee and the IMTA Business Intelligence Task Force.