Data & Trust
Every engagement we take on starts from the same place. A professional services firm is handing us access to its clients, its matters, and its competitive edge. Data posture is the first thing that gets tested, and it should be. This page sets out how we handle it, in the same words we will put in a contract.
How We Handle Data
The Technical Posture
Every client sits in its own environment by default. One firm's data never shares a workspace with another's. Private server options are available for clients with heightened requirements.
Where models are used (Anthropic, Google, OpenAI), we access them through enterprise API channels that contractually do not train on customer data. Custom logic sits on top for client-specific accuracy.
The majority of our automation logic is hard-coded software rules. Deterministic, inspectable, repeatable. Model calls are reserved for tasks where a rule will not do. Less surface area, less drift, fewer surprises.
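As a minimal sketch of that rules-first posture (the function, field names, and rules here are hypothetical illustrations, not our production logic): deterministic checks run first, and a model call happens only when no rule fires.

```python
def call_model_for_classification(line_item: dict) -> str:
    # Placeholder for an enterprise-API model call (the narrow fallback path).
    # In this sketch it simply routes the item to human review.
    return "needs-review"


def classify_line_item(line_item: dict) -> str:
    """Hard-coded rules first: deterministic, inspectable, repeatable."""
    desc = line_item["description"].lower()
    if "court filing" in desc:
        return "disbursement"
    if line_item.get("hours") is not None:
        return "professional-fees"
    # No rule matched: only now does a model enter the picture.
    return call_model_for_classification(line_item)
```

Because the rules are ordinary code, every classification except the fallback can be reproduced and audited without touching a model at all.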
AI is never the decision-maker on matters that affect a client, a customer, or an employee. Its role is limited to analysis and first drafts. A named person still carries the call.
We pull the smallest set of fields needed to do the job, and nothing more. Less data moved, less exposure when something goes wrong, less cleanup on exit.
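In code, that minimisation principle looks like an explicit allowlist (the field names below are hypothetical): fields are copied only if they are named, so a new field in the source system is never pulled by accident.

```python
# Hypothetical allowlist: everything not named here stays in the source system.
ALLOWED_FIELDS = {"matter_id", "status", "due_date"}


def extract_minimal(record: dict) -> dict:
    """Return only the allowlisted fields; unknown fields are never copied."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

The default is exclusion: adding a field to a pipeline is a deliberate change, not a side effect.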
When an automation hits something it cannot handle, it stops and flags it. Nothing is quietly skipped. Nothing is half-done in the background. A broken run is visible the same day, not discovered three weeks later in an audit.
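A sketch of that fail-loud behaviour (the exception and record shape are hypothetical): the run raises a visible flag the moment it hits an unhandled case, rather than skipping the record or guessing.

```python
class FlaggedForReview(Exception):
    """Raised when a run cannot proceed safely; surfaces the same day."""


def process_record(record: dict) -> str:
    if "client_id" not in record:
        # Stop and flag: nothing is quietly skipped or half-done.
        raise FlaggedForReview(f"missing client_id in record: {record!r}")
    return f"processed {record['client_id']}"
```

A raised exception halts the run and lands in monitoring immediately, which is what makes a broken run visible the same day instead of weeks later.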
The Regulatory Picture
Australian AI Compliance, In Plain Terms
Australia does not have a standalone AI Act. AI use is already captured across four layers of existing legislation. These are the ones that matter most for a professional services firm.
Privacy law is the primary instrument, anchored in the Privacy Act 1988 (Cth). From December 2025, APP entities must disclose in their privacy policies where computer programs, whether AI, machine learning, or rule-based tools, make decisions that significantly affect individuals. Enforcement exposure for serious or repeated breaches is material.
Under employment law, algorithmic bias in hiring, promotions, and termination is actionable, and AI-assisted decisions can be subpoenaed. The reasoning has to stand up to scrutiny after the fact, not just at the point of use.
Anti-discrimination legislation covers claims arising from AI use in employment processes. Particularly relevant to any firm using AI in talent acquisition, performance review, or workforce planning.
Consumer law and state-based regimes add further layers depending on the nature of the deployment. Client-facing claims, generated content, and training data all sit in scope, and state-based obligations differ between Victoria, New South Wales, and other jurisdictions.
Next
Send Us The Hard Questions
Your legal, risk, or IT team has a list. Send it. We will work through it in writing, and bring in counsel where it helps.