"This site requires JavaScript to work correctly"

Press article

16th DFDDA

"No Signal - is the truth disappearing in the flood of data?"

30.10.2021 | DIT Public Relations

What does the word "truth" actually mean in a time when more and more data and more and more communication channels offer more and more opportunities to somehow convey virtually any opinion as truth? The already traditional "Deggendorf Forum for Digital Data Analysis" (DFDDA) under the direction of Prof. Dr. Georg Herde (Deggendorf Institute of Technology) asked itself this question. At the 16th forum event on Wednesday and Thursday, experts from tax auditing, tax consultancy and finance discussed the possibilities and limits of so-called artificial intelligence (AI). The motto of the event, which was virtual for the third time due to the Corona pandemic, was: "No signal - is the truth disappearing in the flood of data?"

Only recently, the Deggendorf Institute of Technology had celebrated the graduation of its first students specialising in artificial intelligence, the Vice-President of DIT, Prof. Dr. Horst Kunhardt, told the audience in a welcoming speech. And he issued an admonition that all speakers at the conference echoed in one way or another: "We must never disregard the human being." At the end of all AI analyses, the critical mind of the human being is still needed, he said.

The guest speaker at the event, Prof. Dr. Klaus Mainzer, Professor Emeritus of Philosophy and Philosophy of Science at the Technical University of Munich, also spoke of "Responsible Artificial Intelligence". Mainzer has followed the development of AI, and the expectations placed on it, at an international level. In his lecture he traced the path from early expert systems, which were meant to support doctors in diagnosis through purely logical reasoning over medical data, via the imitation of human brain functions and the detection of patterns in large amounts of data, to today's applications - among them self-learning systems in automotive technology and the decoding of protein structures, and with it the identification of viruses.

Mainzer took up the criticism that such systems are "black boxes": one can see what they find out, but not how they find it out. The systems are "trained like a dog. But in the end, you can still get bitten." Mainzer: "You need visibility, explainability." That means an expert - a doctor, a specialist engineer - has to decide with his "domain knowledge" whether, for example, an automatically generated medical diagnosis is plausible or not. The more influence technology has on people and their everyday lives, the greater the challenge for training the people who work with it. Technology design is required; legal, social, ecological and economic criteria must be built into that design from the outset. Machine learning is "a huge success today", Mainzer said in the subsequent discussion. "But in the end, it is statistics" - with the uncertainties that come with it.

Using the example of the judiciary, Dr Tanja Ihden, FH Krems, who wrote her doctoral thesis in Bremen on the relevance of statistical methods in jurisprudence, described successes and problems of arguing with statistics. She is a member of the research unit "Statistics in Court", founded in 2014/15. According to Ihden, the number of court decisions in which statistical terms appear has increased many times over in the past decades. The impetus comes from the judiciary itself, in almost all areas: the assignment of a DNA sample to a suspect, the reconstruction of an accident through scenarios, or the question of whether a man on whose computer photos from the - not yet punishable - borderline area of child pornography are found is also very likely to possess prohibited photos. Judges are increasingly confronted with terms such as variance or confidence interval, which they must be able to classify correctly in their judgements, said Ihden. Being able to read statistics and evaluate statistical reasoning has become a key qualification for judges, she said.
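As a purely illustrative aside, the terms Ihden mentions can be made concrete in a few lines of Python; the measurement data and the normal approximation are assumptions of this sketch, not part of her research:

```python
import math
import statistics

# Illustrative only: the kind of statistical construct judges now meet in
# expert reports - a mean, its variance, and a 95% confidence interval.
# The measurement data are invented for this sketch.
measurements = [49.2, 50.1, 48.7, 50.5, 49.9, 50.3, 49.5, 50.0]

mean = statistics.mean(measurements)
variance = statistics.variance(measurements)       # sample variance
std_err = math.sqrt(variance / len(measurements))  # standard error of the mean

# 1.96 approximates the normal 97.5th percentile; a real expert report
# would use the t-distribution for a sample this small.
lower, upper = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"mean={mean:.2f}  95% CI=({lower:.2f}, {upper:.2f})")
```

The point of the sketch is Ihden's: the interval is a statement about uncertainty, and reading it correctly - not as a guaranteed range for the truth - is exactly the qualification she describes.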

Tawei (David) Wang, PhD, Associate Professor and Driehaus Fellow, Driehaus College of Business, DePaul University, Chicago, USA, demonstrated how risky the use of social media by employees can be for a company. His study shows how social media data can be used to find security weaknesses in companies' computer systems. Wang and colleagues used the LinkedIn network for their research, but consider the result transferable to other networks. They extracted thousands of personal records with information on current and former professional activities, areas of responsibility and locations, and from the results formed an exposure index for each company. The result was a positive relationship between this index and the number of data breaches in the company's computer network.
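The article does not describe the study's actual scoring method; as a loose, hypothetical sketch of what such an "exposure index" might look like, with invented keywords and weights:

```python
# Hypothetical sketch of an "exposure index": aggregate how much
# security-relevant detail employees reveal in public profiles.
# Keywords, weights and profiles are invented for illustration.
SENSITIVE_KEYWORDS = {
    "active directory": 3, "firewall": 2, "vpn": 2,
    "sap": 1, "oracle": 1, "windows server": 1,
}

def profile_score(profile_text: str) -> int:
    """Weight each security-relevant technology a profile mentions."""
    text = profile_text.lower()
    return sum(w for kw, w in SENSITIVE_KEYWORDS.items() if kw in text)

def exposure_index(profiles: list[str]) -> float:
    """Average per-profile score across a company's employees."""
    if not profiles:
        return 0.0
    return sum(profile_score(p) for p in profiles) / len(profiles)

profiles = [
    "Sysadmin: Active Directory, Windows Server, firewall management",
    "SAP consultant for finance modules",
    "Marketing manager",
]
print(exposure_index(profiles))  # higher values suggest more exposed detail
```

Even this toy version shows the mechanism: the more precisely profiles describe the internal technology stack, the easier it is for an attacker to target it.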

The challenges that growing computing capacities, new processes and new procedures such as AI pose for companies also place new and expanded demands on auditing. Karsten Thomas, Partner IT Assurance at BDO AG Wirtschaftsprüfungsgesellschaft, gave examples of where new, constantly evolving tools can also help the auditor improve the efficiency and quality of the audit. He sees no fundamental conflict between the goals of quality and efficiency. A high degree of automation also reduces the susceptibility to errors, can relieve the auditor and can make anomalies easier to recognise. Thomas presented concrete tools from auditing practice. Outliers in analysed data are a particular challenge: the effort to clarify and assess them is high. There are great hopes for AI procedures here, but Thomas does not yet see them in use at this point. One reason, he said, is that, depending on the company, not every anomaly is an error, and it is difficult to obtain training data for machine learning from companies with their individually designed data systems and processes.
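How such outlier screening might work can be sketched in a few lines; the interquartile-range rule used here is a generic textbook method, not the tooling Thomas presented, and the posting amounts are invented:

```python
import statistics

# Minimal sketch (not any vendor's actual audit tooling): flag outlier
# posting amounts with the common interquartile-range rule. As Thomas
# notes, a flagged value is an anomaly to be explained by a human,
# not automatically an error.
def flag_outliers(amounts: list[float], k: float = 1.5) -> list[float]:
    """Return values lying more than k * IQR outside the quartiles."""
    q1, _, q3 = statistics.quantiles(sorted(amounts), n=4)
    iqr = q3 - q1
    return [a for a in amounts if a < q1 - k * iqr or a > q3 + k * iqr]

postings = [120.0, 95.5, 110.2, 101.3, 98.7, 105.0, 99.9, 12500.0]
print(flag_outliers(postings))  # → [12500.0]
```

The hard part Thomas describes begins after this step: deciding whether the flagged posting is a mistake, fraud, or simply a legitimate but unusual transaction.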

A special form of mass data analysis for tax purposes was presented by Markus Ettinger, Diplom-Finanzwirt (FH) with the large-business and group tax audit of Schleswig-Holstein. The Foreign Tax Act stipulates how transfer prices between related parties or companies and their subcontractors are checked for tax purposes to see whether they correspond to market realities. Transfer prices between comparable, unrelated third parties serve as the yardstick. Ettinger described, in line with the title of his presentation, "Visualisation and Benchmark Studies in Transfer Pricing". For this, comparable companies, and criteria for comparability, must first be found. Using example data, Ettinger showed how a combination of machine screening, ideally interactive methods of visualisation, and critical scrutiny of each individual step can be used to compile a comparison set on which the tax classification can be based, as long as the boundary conditions remain unchanged.
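A minimal sketch of such a screening step, with invented company data and thresholds - filter candidates by comparability criteria, then take the interquartile range of the comparables' margins as an arm's-length range:

```python
import statistics

# Hypothetical benchmark screening: the companies, criteria and margins
# below are invented for illustration and are not Ettinger's data.
candidates = [
    {"name": "A", "industry": "electronics", "independent": True,  "revenue_m": 40, "margin": 0.031},
    {"name": "B", "industry": "electronics", "independent": True,  "revenue_m": 75, "margin": 0.054},
    {"name": "C", "industry": "electronics", "independent": False, "revenue_m": 60, "margin": 0.120},
    {"name": "D", "industry": "textiles",    "independent": True,  "revenue_m": 55, "margin": 0.020},
    {"name": "E", "industry": "electronics", "independent": True,  "revenue_m": 12, "margin": 0.047},
    {"name": "F", "industry": "electronics", "independent": True,  "revenue_m": 90, "margin": 0.060},
]

comparables = [
    c for c in candidates
    if c["independent"]                 # exclude group-related companies
    and c["industry"] == "electronics"  # same line of business
    and c["revenue_m"] >= 20            # rough size criterion
]
margins = sorted(c["margin"] for c in comparables)
q1, _, q3 = statistics.quantiles(margins, n=4)
print([c["name"] for c in comparables], f"arm's-length range: {q1:.3f}-{q3:.3f}")
```

Each filter line corresponds to the kind of comparability criterion Ettinger describes, and each deserves the critical scrutiny he calls for: a single automated screening step silently decides which companies define "market reality".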

At the end of the conference, in his own contribution bridging the various topics, DFDDA Chairman Prof. Dr. Georg Herde posed the question: "Artificial Intelligence - A Solution for Auditing?" He started from the statement: "New techniques are giving AI a strong impetus." But what does AI "understand" about company data as it is available to the auditor? What correlations does it recognise on its own? From structured company data, the auditor extracts a multitude of tables - which, however, interact in a defined way that is not automatically recognisable in the tables themselves. And even in a flat table, only a human can recognise the meaning of the entries: Which chart of accounts was used? What kind of date is in the date field? Depending on where a posting record stands, it can be wrong or right or even meaningless. From these and other considerations, he derived a perspective for developers: the input fields must be strongly standardised, but even the assignment of attributes to data fields cannot be done by an AI; it must be queried anew for each client and carried out by hand. "There is no automatic testing of a programme logic," said Herde, referring - mutatis mutandis - to a result established by the British computer scientist Alan Turing in 1937. In addition, framework conditions such as company structures, prices or laws are constantly changing, so that an AI system would have to be constantly retrained. "These and other problems of AI systems are currently not solvable," Herde noted. Research in this area is therefore important and sensible, he said.

His conclusion: "If an AI system does not say how it arrives at a result, then the auditor can only believe the results. Then he is not auditing."

Photo (DIT): The 16th Deggendorf Forum on Digital Data Analysis took place virtually.