During a recent upgrade to the database at the company where I work, the external consultants in charge of the project ran the update during working hours. On the one hand, it was a complete disaster for my entire department for a business day. On the other hand, it provided me with the opportunity to impress upon my employees the need for data integrity. Using this incident as a training device, I was able to explain in excruciating detail why data integrity is so important.
Accounting systems are made up of two parts: the database and the front end. The front end is what the user ultimately sees. It provides the functionality, the user interface, the reporting systems, etc. Examples of front end systems include Great Plains, ACCPAC, Peachtree, and others. Behind the front end, holding the transactions, is the database. Databases vary in type but work in essentially the same way. They can be extremely complex systems that the front end makes simpler to use. It is possible (with the right knowledge and security settings) to access the transactions directly from the database. Not many people know this, and it should not be done, for many reasons, not the least of which is coming up.
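To make that concrete, this is roughly what direct access looks like. It is a minimal sketch only, and the table and column names (GL_Detail and so on) are hypothetical stand-ins, since every package names these differently:

```sql
-- Read general ledger lines straight from the database, bypassing the
-- front end entirely. All table and column names here are hypothetical,
-- and the date filter is just an example.
SELECT JournalNo, PostingDate, AccountNo, DebitAmount, CreditAmount, LineDescription
FROM GL_Detail
WHERE PostingDate >= '2011-01-01'
ORDER BY JournalNo, LineNo;
```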
As a controller for a company, I have to stand behind the financial statements and verify that they are - to the best of my knowledge - good. This means that not only do I have to verify that the numbers work, I also have to confirm that my internal controls functioned properly. On top of that, there is an implicit confirmation that the data itself is good, that the data integrity hasn't been compromised. If that integrity is compromised, it means that - as a controller - I cannot reasonably assure anyone that the numbers I provide are actually correct. There may be unseen errors in the data that cannot be detected, errors that could easily produce incorrect data, compiled into incorrect information, upon which incorrect decisions are made.
Almost amusingly, this is a best-case scenario. There is a larger, more far-reaching problem that can come of compromised data integrity.
Since the database contains all of the financial information of a company, the accounting system could easily be considered the pulse of the company, and the database its blood. If there is a problem with the data, it will (most of the time) cause the database to shut down and cease functioning. The data becomes corrupted, and the accounting system can no longer start, let alone produce reports, cheques, invoices, or anything else.
To correct this kind of error, there are three main strategies that can be used to recover the data: restore from the daily backup, identify and correct the error, or manually overwrite the corrupted section. The best response is dictated by the type of error. Sometimes it is a simple imbalance between two tables in the database, e.g. between the accounts receivable table and the general ledger posting. Note that this is a different kind of error than the control account being out of balance with the subledger account; that type of error can be corrected with a journal entry. Database errors cannot. Sometimes the error can be far more broad-reaching, such as a situation where the debits and credits in the system do not match. This kind of error likewise cannot be fixed by a journal entry.
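As a rough sketch of how that last kind of error shows itself, a query along these lines (table and column names are hypothetical) compares total debits and credits across the whole ledger detail. Any non-zero difference is a database-level problem that no journal entry will fix:

```sql
-- Compare total debits to total credits across the entire ledger detail
-- table. A non-zero difference indicates compromised data integrity.
-- Table and column names are hypothetical.
SELECT SUM(DebitAmount)                     AS TotalDebits,
       SUM(CreditAmount)                    AS TotalCredits,
       SUM(DebitAmount) - SUM(CreditAmount) AS Difference
FROM GL_Detail;
```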
If the database is totally unrecoverable, or the cost of fixing it severely outweighs the cost of restoring, the best route is to restore from the latest backup and redo the transactions that occurred in the intervening period. The advantages are that a) one can verify that the error doesn't exist in the older version, b) it is the most stable option because there is no possibility of improper amounts being recorded via a database transaction, c) it can be audited at a later date provided the appropriate backup is kept, and d) it will almost certainly correct the error. The main disadvantages are that a) a lot of time can be lost redoing the transactions, b) there is a very strong possibility of lost transactions in the intervening period, and c) unless the cause of the error is truly understood, there is a possibility of the same error recurring.
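For what it's worth, the restore itself is usually the easy part. Assuming a Microsoft SQL Server back end (the database name and backup path below are hypothetical), the script is short; the expensive part is re-keying everything that happened since the backup was taken:

```sql
-- Disconnect all users from the corrupted database, then overwrite it
-- with the most recent full backup. Assumes SQL Server; the database
-- name and backup path are hypothetical.
ALTER DATABASE AccountingDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE AccountingDB
FROM DISK = 'D:\Backups\AccountingDB_Full.bak'
WITH REPLACE, RECOVERY;

ALTER DATABASE AccountingDB SET MULTI_USER;
```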
The second option, to identify and correct the error, is of a somewhat more limited scope. If the error can be corrected by running a query or script on the database itself, this option can be very strong. (I have witnessed an error and corrective action like this on a database where some general ledger accounts did not close to the retained earnings account automatically. The error was discovered at the January month end, after 29 days of transactions had been recorded.) This kind of fix works on specific, smaller problems, but requires a very high level of expertise to perform. Usually, this kind of activity is well beyond the scope and knowledge of a controller, and even if the controller could solve the problem himself or herself, it is very highly recommended that they engage an expert, for two reasons: first, something may go wrong during the corrective action that requires an expert; and second, should something go wrong, it is very helpful to have an external party to point at. (Or didn't you know that's what consultants are for?)
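To give a flavour of the "identify" half of that work, a diagnostic query like the following would list the income statement accounts that failed to close. All names and the fiscal year filter are hypothetical, and the actual fix in the incident above was written and run by the vendor's specialists, not by me:

```sql
-- Find revenue and expense accounts that still carry a balance after the
-- year-end close should have zeroed them out. All names are hypothetical;
-- the corrective script would then post closing lines to retained earnings.
SELECT a.AccountNo,
       SUM(d.DebitAmount) - SUM(d.CreditAmount) AS RemainingBalance
FROM GL_Detail d
JOIN GL_Accounts a ON a.AccountNo = d.AccountNo
WHERE d.FiscalYear = 2010
  AND a.AccountType IN ('Revenue', 'Expense')
GROUP BY a.AccountNo
HAVING SUM(d.DebitAmount) - SUM(d.CreditAmount) <> 0;
```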
The third option is to manually overwrite the corrupted section. Of the three, this is the worst for a few reasons. First, it requires more expertise than the second option (running a script-driven repair). Second, databases function in very complicated ways. A simple general ledger transaction, for example, could contain the following information: journal entry number, date, account number, amount, line description, originating exchange rate, modified exchange rate, functional currency amount, entry date, entry username, etc. These are far from all of the fields, but they indicate the level of complexity that even a simple journal entry contains. When overwriting the affected section of the database, it is difficult to ensure that all of the subtle links are corrected and that the system works properly and as expected when complete. The kinds of errors this approach can correct are also very limited in scope. It is almost a worst-case scenario, because of these complexities and the expertise required to complete the work.
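To illustrate why (not to recommend it), here is what a manual overwrite tends to look like. The statement itself is trivial, and that is exactly the trap; all names and values are hypothetical:

```sql
-- This one-line "fix" changes a single ledger line, but it silently leaves
-- everything linked to that line untouched and therefore wrong. All names
-- and values are hypothetical.
UPDATE GL_Detail
SET DebitAmount = 1250.00
WHERE JournalNo = 10482
  AND LineNo = 3;

-- Still reflecting the old amount, and now inconsistent:
--   the batch and control totals in the journal header table
--   the subledger rows (e.g. open invoices) posted from this journal
--   the audit trail recording what was originally entered
```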
The complexities involved in any of these scenarios are time-consuming and problematic. For each hour that is lost to correcting a database error, the company is likely losing two to three hours in the time it will take to recover to the point before the error occurred. This lost productivity can cost the company thousands of dollars, even for a one-day error that gets fixed. The expense to the company can rapidly get out of control and, in a very bad case, could threaten the very existence of the company.
Given the different methods of dealing with a database integrity crisis, the best advice that I can give anyone, anywhere, under any circumstances is this: don't let it happen. Database integrity isn't a hard thing to maintain. Controllers, accountants in general, and any information worker should be aware of the concepts that underlie the information systems they use. This isn't to say that every accounting clerk should know how to run an SQL script, but they should know the fundamentals of how database tables work and what signs may indicate compromised database integrity; one such check is sketched below. Some may feel that such activity goes beyond the scope of accounting staff, but it is my opinion that accounting staff are the front line of defense for the information with which they work.
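As one example of the kind of sign worth watching for, a simple orphan check like the one below (hypothetical names again) can be run, or requested from IT, at every month end. Journal lines whose header row has vanished are a classic symptom of compromised integrity:

```sql
-- List journal detail lines whose header/batch row no longer exists.
-- Orphaned rows like these should never appear in a healthy database.
-- Table and column names are hypothetical.
SELECT d.JournalNo, d.LineNo, d.AccountNo, d.DebitAmount, d.CreditAmount
FROM GL_Detail d
LEFT JOIN GL_Header h ON h.JournalNo = d.JournalNo
WHERE h.JournalNo IS NULL;
```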