The Big Four – the four basic "time eaters" of controllers

In our work, we repeatedly encounter several recurring challenges faced by controllers. The larger and more complex an organisation's processes, the more acute these problems become.

The controlling challenges we have identified relate mainly to process management and to the labour-intensity and complexity of the tasks involved. The resulting problems mean that controlling cannot focus on analysis, inference and recommendations. Instead, it is forced to devote a great deal of time to activities that should merely form the background to its work. The challenges listed below (as well as other, more specific ones) can be met with the right solutions.

Challenge 1:

Designing forms, collecting data and controlling the process.

One of the most labour-intensive tasks facing controlling is the creation of new forms and the updating of existing ones. Each time, it is a laborious activity, highly prone to errors. The time cost is compounded by the process of sending out and collecting files. Customising the forms also demands considerable time and effort – and if it is skipped, their clarity suffers. Once distributed, files are inflexible with respect to changes in scope, assumptions, rules and calculations. The whole process is constrained by the lack of efficient communication with users. The resulting errors lead to a proliferation of files, with different versions of spreadsheets containing different data. User modifications are not subject to any system control. The final consolidation of data from these files often forces users to re-create and re-fill forms to eliminate errors and inconsistencies. The whole process becomes extremely tedious and difficult to control.

The solution to these problems is to implement a central model accessible to all users. Changes are then automatically visible in controlling – there is no need to distribute files and consolidate data from multiple sources. The problem of data discrepancies also disappears: the data held in the system constitutes a single, valid version of the truth. A central system is a complete solution, supporting controlling as well as analysis and reporting. The user interface is clear, intuitive and, most importantly, uniform. The system allows top-down quality control of the entered data, and an efficient architecture with a high-performance database allows many users to work simultaneously.
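The idea of a single central model with quality control at the point of entry can be sketched in a few lines. This is an illustration of the principle only – the class, method names and validation rule below are assumptions for the example, not any vendor's actual implementation:

```python
# Minimal sketch of a "single source of truth": every user writes to one
# central store, and validation hooks reject bad input at entry.
# All names here are illustrative assumptions.

class CentralModel:
    def __init__(self, validators=None):
        self._data = {}                 # (account, period) -> value
        self.validators = validators or []

    def submit(self, user, account, period, value):
        # Top-down quality control: every entry passes the same checks.
        for check in self.validators:
            if not check(value):
                raise ValueError(f"rejected input from {user}: {value!r}")
        # One shared copy: the change is immediately visible to everyone.
        self._data[(account, period)] = value

    def get(self, account, period):
        return self._data[(account, period)]

# Example: a simple rule that budget entries must be non-negative numbers.
model = CentralModel(validators=[lambda v: isinstance(v, (int, float)) and v >= 0])
model.submit("alice", "Travel", "2024-01", 5_000)
```

Because there is only one copy of each value, the "which spreadsheet version is current?" question never arises.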

Challenge 2:

Managing changes, data versions and scenarios.

Managing data versions and scenarios is a non-trivial task that, without system support, is prone to frequent errors. Duplication of data leads to a build-up of inaccuracies. The proliferation of data versions and scenarios demands too much commitment and effort from the data manager. Additional problems arise when frequent forecasting is needed. The complicated procedure of merging multiple versions of data leads to unsatisfactorily long waiting times for results. Add software performance constraints and long calculation times, and the management of data and data changes becomes inefficient. At some stage, rapid changes and top-down modifications become impossible, and data copying is used as a stop-gap. Ultimately, managing the distributed files becomes difficult and time-consuming.

The problem of duplicated data and scenarios disappears with a multidimensional database. It is a solution that offers efficient data administration and easy control of the data model. A further advantage is the wide range of administrative tools adapted to different user groups (e.g. Excel, application, web). The multidimensional database is an extremely powerful solution, and its speed is particularly evident when working with large and diverse data sets. With a central database, creating new scenarios does not disperse the data, and data copying is efficient and not subject to the risk of human error. Built-in functionality (e.g. the splasher) reduces the time spent on data allocation to the configuration of the mechanism.
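The allocation mechanism mentioned above – "splashing" a top-down value across many cells – can be sketched as a proportional spread over a reference profile. The function name, data shapes and the fallback rule are assumptions made for this illustration:

```python
# Hypothetical sketch of top-down value "splashing": a target total is
# distributed across leaf cells in proportion to a reference series
# (e.g. last year's actuals). Illustrative only.

def splash(target_total, reference):
    """Allocate target_total across the keys of `reference`
    proportionally to their current values."""
    base = sum(reference.values())
    if base == 0:
        # Assumed fallback: even split when there is no reference profile.
        share = target_total / len(reference)
        return {k: share for k in reference}
    return {k: target_total * v / base for k, v in reference.items()}

# Example: spread a 120,000 annual budget over months
# using last year's seasonal profile.
last_year = {"Jan": 8_000, "Feb": 10_000, "Mar": 12_000}
plan = splash(120_000, last_year)
```

Configured once, such a mechanism replaces the manual copying and re-keying of allocated values across scenario files.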

Challenge 3:

Combining data from different areas and sources.

Consolidating data from different sources is another area that consumes a disproportionate amount of controlling time. It requires the manager to link multiple thematic budgets together. Because the data comes in from different users, it has varying levels of detail, different scopes and different assumptions – in short, it is heterogeneous. Differences in data formats and structures, combined with a lack of alignment between budgeting and reporting cycles, result in overly long forecasting and budgeting cycles. Inconsistent data requires repeated reconciliation and error correction, and the effort required to combine it is disproportionate to the results.

Solutions built on off-the-shelf models have the clear advantage of providing a central financial model that is not susceptible to the problems described above. Ready-made thematic models can be used as they are, or any number of new ones can be created. The central model is easy to extend with additional models, and the complexity of the new models is in practice unlimited. Built-in management functionality, together with the ability to configure and extend it, is also important here. Another advantage of off-the-shelf software is the availability of existing templates (Capex, Fleet, Marketing…).
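The essence of linking thematic budgets is mapping each heterogeneous input onto one shared schema before totalling. A minimal sketch, in which the field names, units and the two example budgets are entirely assumed for illustration:

```python
# Hedged sketch: two thematic budgets arrive with different column
# names and units; each is normalised to one schema, then consolidated.
# Field names and unit conventions are illustrative assumptions.

def normalise_capex(row):
    # Assumed convention: the Capex team reports in thousands.
    return {"account": row["asset"], "value": row["amount_k"] * 1_000}

def normalise_marketing(row):
    # Assumed convention: Marketing reports absolute spend.
    return {"account": row["campaign"], "value": row["spend"]}

def consolidate(sources):
    """sources: list of (rows, normaliser) pairs -> {account: total}."""
    totals = {}
    for rows, normalise in sources:
        for row in rows:
            rec = normalise(row)
            totals[rec["account"]] = totals.get(rec["account"], 0) + rec["value"]
    return totals

capex = [{"asset": "Fleet", "amount_k": 50}]
marketing = [{"campaign": "Q1 launch", "spend": 20_000}]
total = consolidate([(capex, normalise_capex), (marketing, normalise_marketing)])
```

In a central model, this normalisation is defined once per source rather than redone by hand in each budgeting cycle.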

Challenge 4:

Answering additional questions with regular reporting.

Due to the varied needs of users, reports and forms often have to be personalised, which results in the generation of too many standard reports. Furthermore, linking plans, budgets and forecasts to actual performance becomes cumbersome without sophisticated tools. Once again, the effort and time required for data acquisition is disproportionate to the results obtained. Between the labour-intensive distribution of reports and everything else, too little time may ultimately be devoted to analysis, so the conclusions drawn are incomplete. The biggest problem arises when customised questions and analyses are requested: the response to user needs is then noticeably slow. This is perhaps the most acute of the problems identified, as the organisation often does not receive information when it needs it. The result can be an inconsistent basis for management decisions, differing interpretations and differing results.

Off-the-shelf systems offer users the full capabilities of Business Intelligence solutions, in particular the convenience of predefined reports and analyses. The broad spectrum of data access paths and report formats (web, Excel, Word, PowerPoint, PDF, iPad…) is a further advantage. Off-the-shelf software enables quick and efficient analyses – including self-service analyses – and provides automatic reporting and automatic report distribution, saving users time. The ability to integrate fully with legacy systems and data sources is also significant.
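The automatic-distribution idea – one dataset, one template, a personalised extract per recipient – can be sketched briefly. The recipient list, the department-level access rule and the plain-text "report" format are assumptions for this example:

```python
# Illustrative sketch of automatic report distribution: each recipient
# receives a report built from the same central dataset, limited to the
# departments they are entitled to see. All names are assumptions.

def build_reports(data, recipients):
    """data: {dept: value}; recipients: {name: [depts visible to them]}."""
    reports = {}
    for name, depts in recipients.items():
        lines = [f"{d}: {data[d]:,}" for d in depts if d in data]
        reports[name] = f"Monthly report for {name}\n" + "\n".join(lines)
    return reports

actuals = {"Sales": 120_000, "HR": 45_000}
reports = build_reports(actuals, {"cfo": ["Sales", "HR"], "hr_lead": ["HR"]})
```

Generating every variant from the one central dataset removes both the manual distribution step and the risk that recipients work from diverging figures.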

These are just the basic (most commonly identified by us) and most impactful challenges. If your needs are more specific, or you would simply like to learn more about our experience in, for example, sales forecasting, tax forecasting, HR controlling, predictive Analysis & Planning, mobile reporting or other controlling topics, do not hesitate to contact us.
