– A Whitepaper Based on a Survey of Australian Professionals

Author: Houratious Albert, AI Governance Consultant, Veynad Pty Ltd

The rapid proliferation of generative AI tools such as ChatGPT, GitHub Copilot, and Notion AI has made artificial intelligence accessible to a broad range of knowledge workers. To better understand how organisations are responding to this shift, Veynad Pty Ltd conducted a survey exploring AI tool adoption, usage patterns, and readiness for responsible implementation.

The results reveal that nearly all respondents (95.7%) have used AI tools in their work within the past six months. However, this adoption has been largely informal: 70% of users adopted AI independently, with minimal support or structured guidance from their organisations. Confidence in using AI responsibly is high (69.6%), yet fewer than half (47.8%) of respondents have received any form of training, and policy awareness is inconsistent.

This whitepaper presents the findings in detail and interprets them through a lens of executive accountability, operational risk, and AI governance maturity. It highlights key gaps and opportunities aligned with ISO 42001, particularly around leadership, competency building, and responsible innovation, and concludes with actionable recommendations for executive teams.

If you are interested in using a similar set of questions in your own organisation, check out my suggested questions here.

The survey attracted responses from professionals across a diverse range of sectors. The majority of participants came from regulated or technology-oriented industries, where responsible AI adoption is especially critical. The most represented sectors included:
• Banking and Financial Services: 26.1%
• Logistics: 8.7%
• FinTech: 8.7%
• Government and Regulatory: 4.3%

This industry mix offers valuable insights into both the drivers and challenges of AI adoption in environments where compliance, security, and operational consistency are high priorities.

Key Findings

The survey revealed a number of clear trends about how AI is currently being used in the workplace, and where organisations may be falling behind in their governance and support efforts.

Widespread Adoption, Largely Self-Initiated

A significant majority of respondents (95.7%) reported using AI tools such as ChatGPT, Copilot, and GrammarlyGO in their day-to-day tasks. Most of this adoption was self-driven, with nearly 70% indicating they started using AI on their own. Only 17.4% said the tools were formally introduced or recommended by their organisation.

Limited Training and Support

While many users are enthusiastic and confident in applying AI to their work, few have received structured training. Just 47.8% of respondents had completed any form of AI-related training, and only some of that was provided by their organisations. This suggests a reliance on informal, self-directed learning rather than a coordinated enablement strategy.

AI Recognised as a Productivity Enabler

Respondents highlighted several use cases where AI added value, including drafting emails, summarising documents, generating interview questions, and supporting code development. Many described these tools as providing “peace of mind,” “independence,” or “solutions at their fingertips.”

Confidence High, but Policy Awareness Uneven

Approximately 70% of participants said they felt confident in using AI tools responsibly at work. However, this confidence isn’t necessarily underpinned by clear organisational guidance. Only a portion of respondents reported having read an official policy; others were unsure whether one existed at all.

Common Concerns: Accuracy and Privacy

Despite the enthusiasm, users acknowledged risks. The most frequently cited concern was the potential for incorrect responses. A few also raised issues related to data privacy and uncertainty about how far they could rely on AI-generated outputs within the scope of their roles.

The findings reveal a clear pattern: AI adoption is happening quickly, but organisational readiness is lagging behind. For executives, this presents both a risk and an opportunity.

Governance Is Not Keeping Pace with Adoption

The fact that most AI use is self-initiated suggests that organisations may not yet have formalised their approach to AI governance. In some cases, this can lead to productivity gains. But without guidelines, it also creates uncertainty. Employees may not know what kinds of data are appropriate to enter into AI tools, or whether their use aligns with company policies.

From an ISO 42001 perspective, this highlights gaps in leadership commitment, policy communication, and risk management structures.

Capability Gaps Limit Responsible Use

Confidence among users is encouraging, but confidence without support can be misleading. Only a minority of users have completed training, and most are relying on intuition or peer examples to guide their use. This lack of structured enablement makes it difficult for organisations to ensure responsible use at scale.

ISO 42001 Clause 7 outlines the need for awareness, education, and role-specific competencies to manage AI-related risks – areas where most organisations in this survey appear underdeveloped.

AI Use Cases Show Clear Value

Despite governance and capability gaps, users are discovering real benefits. Tasks like writing, summarising, and basic research are being performed faster and with less friction. This shows that even informal use can drive productivity gains. The challenge now is to harness that value safely and strategically.

Unclear Policies Create Risk Exposure

When policy awareness is low, it becomes harder to enforce standards or respond to incidents. In regulated sectors, this could increase exposure to compliance breaches or reputational harm. Establishing clear, well-communicated AI use policies is a foundational step toward safe and scalable adoption.

AI is already being used across your organisation, even if you haven’t formally introduced it. The challenge now is to catch up with governance, provide the right support, and guide AI use in ways that align with your risk appetite and business priorities. Based on the survey results, here are four priority actions for executive teams:

Create and communicate simple, practical guidelines that explain when and how AI tools can be used. Include examples relevant to your work context, and clarify what types of data should never be entered into third-party AI tools. Make the policy easy to access, and revisit it regularly as technology and risks evolve.
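
Where a team wants to make such a guideline concrete, the written policy can be paired with a lightweight technical control. The sketch below is a hypothetical illustration only, written in Python: the pattern list, names, and data types are simplified assumptions rather than a complete or authoritative set, and any real control would need review by your security and privacy specialists.

import re

# Hypothetical pre-submission screen showing how a "never enter this data"
# guideline could be operationalised. The patterns are simplified examples,
# not a complete set of sensitive data types.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax file number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive data types detected in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: warn before the prompt reaches a third-party tool.
findings = screen_prompt("Summarise this thread: contact jane@example.com")
if findings:
    print("Prompt blocked - appears to contain: " + ", ".join(findings))
else:
    print("No obvious sensitive data detected.")

A screen like this is deliberately simple and will miss context-dependent information, so it complements, rather than replaces, clear policy and training.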

Employees are already experimenting with AI, often successfully. Instead of restricting this behaviour, create safe environments where innovation can thrive with the right guardrails. Encourage teams to share what works, flag what doesn’t, and explore AI’s value within ethical and legal boundaries.

Generic AI awareness isn’t enough. Tailor your support to different job functions – for example, legal teams may need training on prompt auditing and redaction, while marketers might benefit from workshops on ethical content generation. Structured, ongoing learning builds both competence and confidence.

Standards like ISO 42001 offer a strong foundation for AI governance. Begin assessing your organisation’s readiness across areas such as risk controls, lifecycle accountability, transparency, and continuous improvement. These elements are not just compliance issues; they are also strategic enablers.

AI is no longer a future trend. Employees are embracing these tools quickly, often ahead of official policies or training programs. This grassroots adoption has delivered early wins in productivity and efficiency, but it also introduces risks that can’t be ignored.

For executive leaders, the path forward is not about slowing down AI usage. It’s about building the structures that make its use safe, consistent, and aligned with organisational goals. That means treating AI like any other transformational capability: with clear leadership, strong governance, and a long-term view.

Organisations that act now by investing in awareness, enabling responsible experimentation, and aligning with emerging standards like ISO 42001 will be better positioned to unlock the full value of AI while maintaining trust, security, and accountability.

The future of AI at work is already taking shape. Leadership today will define whether that future is well-governed and sustainable or reactive and fragmented.

For further discussion on AI governance, please engage with us via our LinkedIn page:
https://www.linkedin.com/company/veynad

Acknowledgements

I would like to extend my sincere thanks to all the professionals who took the time to participate in this survey. Your openness in sharing your experiences has been invaluable in helping us better understand how AI is being adopted and perceived in the workplace.

On behalf of Veynad Pty Ltd, I also wish to acknowledge the contributions of individuals across banking, government, logistics, fintech, and technology sectors. Your insights have helped shape a more informed and practical conversation around the future of AI in Australian organisations.

Image credits: Steve Johnson and Falco Masi from Unsplash.


Disclaimer

This whitepaper is provided for informational purposes only and does not constitute legal, regulatory, or professional advice. The findings and interpretations are based on survey data collected anonymously and reflect the views of respondents at the time of participation.
While care has been taken to ensure accuracy, Veynad Pty Ltd makes no warranties or representations regarding the completeness or suitability of the content for any specific purpose. Organisations should consult relevant experts when making decisions about AI governance, compliance, or implementation strategies. Reference to ISO 42001 and other frameworks is intended as a general guide only and does not imply certification or formal endorsement.