AI in tax offices: More than just tools
The discussion surrounding the use of artificial intelligence in tax firms is currently dominated by one topic: tools. Experiments are being conducted everywhere – be it new AI assistants, agents, or automation add-ons. The temptation to "just get started" is great. But this is precisely where a significant danger lies: Those who fail to consider the legal and organizational fundamentals expose their firm to massive risks.
A recent court decision illustrates this clearly: On July 14, 2025, the Bremen Administrative Court ruled (case no. 2 K 763/23) that the automated determination of waste disposal fees may violate Article 22 of the GDPR. The court clarified that it is not sufficient for employees to merely initiate programs or conduct random checks – a genuine decision must be made by a human.
This finding affects not only public administration, but also the private sector – and thus also tax advisors, who are increasingly using AI-supported systems.
1. Automated decisions under Art. 22 GDPR
Article 22 GDPR provides for a general prohibition:
No one may be subjected to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them.
For tax firms, this means that processes such as applicant selection, staff scheduling, or client acceptance via scoring procedures may fall under this prohibition. Client processes such as credit assessments, risk analyses, or liquidity forecasts are also affected as soon as they trigger decisions that affect natural persons. It is not enough for an employee to simply "trigger" the system or check a box – what is crucial is whether a human genuinely makes the decision. This is precisely what the Bremen Administrative Court emphasized.
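To make that distinction concrete in system design, here is a minimal, deliberately simplified Python sketch of how an AI-supported workflow can be structured so that the system only proposes and a human genuinely decides. All class, field, and person names are illustrative assumptions, not taken from any specific product or prescribed by the GDPR:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """Output of the automated scoring step: a proposal, not a decision."""
    subject_id: str
    score: float
    proposal: str  # e.g. "accept" or "reject"

@dataclass
class HumanDecision:
    """The record that documents a genuine human decision."""
    recommendation: Recommendation
    reviewer: str           # a named person, not a service account
    decision: str           # may deviate from the automated proposal
    rationale: str          # the reviewer's own reasoning
    decided_at: datetime

def require_human_decision(rec: Recommendation, reviewer: str,
                           decision: str, rationale: str) -> HumanDecision:
    # A missing reviewer or an empty rationale would be exactly the
    # rubber-stamping the Bremen court found insufficient.
    if not reviewer or not rationale.strip():
        raise ValueError("A documented human decision with reasons is required.")
    return HumanDecision(rec, reviewer, decision, rationale,
                         datetime.now(timezone.utc))

# The system may score and propose ...
rec = Recommendation(subject_id="applicant-42", score=0.31, proposal="reject")
# ... but nothing takes effect until a human decides and explains why.
final = require_human_decision(rec, reviewer="M. Mustermann",
                               decision="invite to interview",
                               rationale="Score ignores recent retraining; profile fits.")
```

The design point is that the human decision is a mandatory, documented step with its own rationale – not a confirmation click appended to an automated outcome.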
2. The permissible exceptions – and their limits
Automated decisions are permitted only in exceptional cases – for example, if they are necessary for entering into a contract, if there is a legal basis, or if the data subject has explicitly consented. However, these exceptions are rarely applicable, especially in everyday firm operations. Job applications, client acceptance, and internal employee management remain delicate areas: These require genuine human oversight, clear documentation, and transparent information for the data subjects.
3. The EU AI Act as a second regulatory layer
While the GDPR protects the rights of data subjects, the EU AI Act regulates the technology itself. It is particularly relevant for tax firms when AI systems are used in HR and recruiting, as these are considered high-risk systems. Systems for creditworthiness checks, scoring, or client selection can also fall into this category. The AI Act imposes strict requirements on such systems: risk management, bias checks, documentation, logging, transparency, and, always, human oversight.
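To make "documentation and logging" tangible, here is a minimal sketch of what a per-decision audit record could capture. The field names and the JSON-Lines format are illustrative assumptions – the AI Act prescribes traceability, not a specific schema:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(logfile: str, *, system_id: str, model_version: str,
                    risk_class: str, input_summary: dict, output: dict,
                    human_reviewer: str, human_outcome: str) -> None:
    """Append one traceable record per AI-assisted decision (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system was used
        "model_version": model_version,    # exact version, for reproducibility
        "risk_class": risk_class,          # result of the risk classification
        "input_summary": input_summary,    # what the system saw (data-minimized)
        "output": output,                  # what the system proposed
        "human_reviewer": human_reviewer,  # who exercised oversight
        "human_outcome": human_outcome,    # what was actually decided
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_ai_decision("ai_decisions.jsonl",
                system_id="applicant-scoring",
                model_version="2025-07-01",
                risk_class="high-risk (employment context)",
                input_summary={"features_used": 12},
                output={"proposal": "reject", "score": 0.31},
                human_reviewer="M. Mustermann",
                human_outcome="invite to interview")
```

Even a record this simple answers the questions supervisors ask first: which system, which version, what did it propose, and who actually decided.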
This means that tax firms have to observe two legal frameworks at the same time: Firstly, the GDPR, which asks whether a decision may be made automatically at all, and secondly, the EU AI Act, which examines how a system must be designed technically and organizationally.
4. B2B and B2C – what does that mean in practice?
Some believe these rules are only relevant to private customers. However, this view falls short. For clients, applicants, or employees in the B2C sector, the scope of application is obvious. But caution is also required in the B2B context: As soon as sole proprietors or freelancers are involved, their data is personal data. Even with limited liability companies (GmbHs), overlaps can arise if data about managing directors or contact persons is processed. And regardless of this, the AI Act applies as soon as an AI system is used in a high-risk context – even if the decision only affects a company.
5. Why “fast AI agents” are dangerous
Many firms tell us: "We have an agency that can quickly put together an AI agent for us." This sounds attractive in the short term, but it is precisely where the greatest risks lie. Just because something is technically possible doesn't mean it's permitted. Failure to consider the GDPR and data subject rights from the outset creates compliance gaps. Without risk classification and documentation, the system violates the AI Act – with fines of up to €35 million. And an isolated AI agent without governance cannot be integrated into a firm's overall strategy. The result is solutions that are neither legally compliant nor sustainable.
6. Dual expertise: Law + Technology
This is precisely where the difference lies between short-term IT solutions and a genuine trusted-advisor strategy. We combine legal expertise (GDPR, EU AI Act, the professional code of conduct for tax advisors) with technological competence (Microsoft 365, Azure, Copilot, Fabric). Instead of selling individual tools, we work with firms to develop platform strategies that combine security, automation, and client focus. AI systems are implemented from the outset in such a way that they are not only technically functional, but also legally robust and organizationally embedded.
7. Conclusion: AI in the tax firm needs strategy – not just technology
For tax advisors, this means that anyone who wants to use AI systems must be aware of the legal limits of automated decision-making and comply with both the GDPR and the EU AI Act in parallel. "Quickly building an agent" sounds tempting, but in practice leads to compliance risks and expensive rework. The right approach is strategically planned, legally sound, and technically robust implementations that integrate law and technology. This is precisely the role of the trusted advisor – and this is precisely where we differ from traditional IT firms or consultants who only see one side of the equation.
Conclusion in one sentence:
Tax firms that want to use AI don't just need tools – they need dual expertise in law and technology, so that digitalization, automation, and compliance go hand in hand.
