Kohlver AI

Data & Trust

Every engagement we take on starts from the same place: a professional services firm is handing us access to its clients, its matters, and its competitive edge. Data posture is the first thing that gets tested, and it should be. This page sets out how we handle it, in the same words we will put in a contract.


Our Position

Confidentiality sits at the centre of every engagement.

We design from the data outward. Workflow follows.

How We Handle Data

The Technical Posture

Siloed client environments
01

Every client sits in its own environment by default. One firm's data never shares a workspace with another's. Private server options are available for clients with heightened requirements.

Foundation models through enterprise APIs
02

Where models are used (Anthropic, Google, OpenAI), we access them through enterprise API channels whose terms contractually prohibit training on customer data. Custom logic sits on top for client-specific accuracy.

Rules-first, not model-first
03

The majority of our automation logic is hard-coded software rules. Deterministic, inspectable, repeatable. Model calls are reserved for tasks where a rule will not do. Less surface area, less drift, fewer surprises.
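In sketch form, rules-first routing looks like this. The patterns, labels, and function names below are illustrative, not a description of our production code:

```python
# Illustrative rules-first pipeline: deterministic rules are tried first,
# and a model call is only the fallback when no rule applies.
import re

RULES = [
    # (pattern, label) pairs: inspectable, testable, deterministic
    (re.compile(r"\binvoice\b", re.I), "invoice"),
    (re.compile(r"\bengagement letter\b", re.I), "engagement_letter"),
]

def classify(text: str) -> str:
    for pattern, label in RULES:
        if pattern.search(text):
            return label          # rule matched: no model call, no drift
    return call_model(text)       # model reserved for what rules can't do

def call_model(text: str) -> str:
    # Placeholder for an enterprise-API model call.
    return "needs_review"
```

A document containing the word "invoice" is labelled by the rule alone; the model is never consulted, which is what keeps the surface area small.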

No AI decision-making
04

AI is never the decision-maker on matters that affect a client, a customer, or an employee. Its role is limited to analysis and first drafts. A named person still carries the call.
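The gate can be sketched as code. Everything here, including the approver's name, is illustrative:

```python
# Illustrative human-in-the-loop gate: a model may produce a draft, but
# nothing is actioned without a named person attached to the decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    approved_by: Optional[str] = None   # always a named person, never "AI"

def approve(draft: Draft, person: str) -> Draft:
    draft.approved_by = person
    return draft

def execute(draft: Draft) -> str:
    if not draft.approved_by:
        raise PermissionError("no named approver: draft cannot be actioned")
    return f"actioned by {draft.approved_by}"
```

An unapproved draft cannot be executed at all; the approver's name travels with the record, so the call is always attributable to a person.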

Minimum-viable data
05

We pull the smallest set of fields needed to do the job, and nothing more. Less data moved, less exposure when something goes wrong, less cleanup on exit.
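The mechanism is an allowlist at the boundary. The field names below are illustrative:

```python
# Illustrative minimum-viable-data extraction: only fields on an explicit
# allowlist ever leave the source system.
ALLOWED_FIELDS = {"matter_id", "due_date", "status"}

def extract(record: dict) -> dict:
    # Anything not on the allowlist is dropped at the boundary, so it can
    # never be moved, stored, or leaked downstream.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

A record arriving with twenty fields leaves with three; the other seventeen never enter our environment, so there is nothing to secure and nothing to delete on exit.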

Failure is loud, not silent
06

When an automation hits something it cannot handle, it stops and flags it. Nothing is quietly skipped. Nothing is half-done in the background. A broken run is visible the same day, not discovered three weeks later in an audit.
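The stop-and-flag behaviour can be sketched in a few lines. The names are illustrative:

```python
# Illustrative loud-failure pattern: an unhandled record halts the run and
# raises, rather than being skipped in the background.
class RunHalted(Exception):
    """Raised so a broken run is visible the same day, not weeks later."""

def process_batch(records, handle):
    done = []
    for i, record in enumerate(records):
        try:
            done.append(handle(record))
        except Exception as exc:
            # Stop immediately and flag exactly where and why.
            raise RunHalted(f"record {i} failed: {exc}") from exc
    return done
```

The alternative, a bare `continue` inside the `except` block, is exactly the silent half-done run this card rules out.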

The Regulatory Picture

Australian AI Compliance, In Plain Terms

Australia does not have a standalone AI Act. AI use is already captured across four layers of existing legislation. These are the ones that matter most for a professional services firm.

Privacy Act (Commonwealth)
01

The primary instrument. From December 2025, APP entities must disclose in privacy policies where computer programs, whether AI, machine learning, or rule-based tools, make decisions that significantly affect individuals. Enforcement exposure for serious or repeated breaches is material.

Anti-Discrimination Act
02

Algorithmic bias in hiring, promotions, and termination is actionable. Records of AI-assisted decisions can be subpoenaed. The reasoning has to stand up to scrutiny after the fact, not just at the point of use.

Fair Work Act
03

Covers discrimination claims arising from AI use in employment processes. Particularly relevant to any firm using AI in talent acquisition, performance review, or workforce planning.

Consumer Law, IP & Copyright
04

Additional layers apply depending on the nature of the deployment. Client-facing claims, generated content, and training data all sit in scope. State-based obligations also differ across Victoria, New South Wales, and other jurisdictions.

Next

Send Us The Hard Questions

Your legal, risk, or IT team has a list. Send it. We will work through it in writing, and bring in counsel where it helps.