AI in Financial Planning: Best Practices

François Lucas-Cardona

A.S.A., MBA, Pl. Fin.

Conseiller principal, développement et qualité de la pratique

Institut de planification financière

Since the end of 2022, artificial intelligence (AI) – and especially generative AI – has rapidly made its way into the daily lives of financial planners. It has entered our tools, our processes and even our discussions with clients, eliciting curiosity and enthusiasm, along with a certain uneasiness. The promises are great, yet the technology's actual impact on the practice remains hard to pin down.

This article draws on a panel held at the Institute of Financial Planning’s 2025 Congress in Québec City. The purpose of the article is to provide some concrete benchmarks for the responsible inclusion of AI, in keeping with the professional obligations of our trade.

AI in the daily practice of financial planners

AI is already firmly rooted in the practice, even though it is still often misunderstood. Many professionals use it for practical tasks: drafting emails, preparing meeting notes or explaining complex concepts to clients. Others use it to generate financial simulations, sort documents or structure preliminary analyses, saving time for discussions that offer greater added value.

Some tools go even further: they perform tax audits, optimize disbursement strategies or prepare tax return projections. Some advisors claim that these tools can shave off up to 90% of the cognitive load required to build and simulate plans,1 allowing planners to focus on analysis, teaching and human relationships. But the gap between marketing promises and reality on the ground – between productivity and transparency – explains the discomfort that many professionals feel.


1 Finance et Investissement | Plans financiers : outils à la rescousse

Clear guidelines

Contrary to popular belief, however, the use of AI for financial planning is not drifting in a legal void. The duties of confidentiality, competency and diligence still apply in full. One central principle holds sway: if the F.Pl. cannot explain a recommendation offered by an AI tool, they must not present it. AI supports human judgment; it does not replace it.

Emerging use policies

To govern the use of these new tools, many firms are adopting internal AI use charters that establish authorized tools, data processing rules and verification and traceability mechanisms. Even a bare-bones approach – such as prohibiting the use of sensitive data without supervision or requiring a written record of validations – contributes to safer, more professional use.

Practical principles to remember

Three simple rules guide the responsible use of AI:

  1. Know where the information is flowing and where it is stored
  2. Always double-check results before presenting them to the client
  3. Document the entire process, including the contributions of AI tools, to ensure traceability and compliance

These principles are combined with rigorous management of digital security: access control, use of robust passwords and secure document archiving. This framework is essential for protecting both the client and the firm in our constantly changing technological environment.

In this modernization process, the emergence of advanced technologies – especially AI – is also transforming the very nature of financial advice. 

As explained by Charles Hunter-Villeneuve, a renowned collaborator of the Institute, “AI’s arrival in financial planning means, in my opinion, that part of the advisor’s value is shifted even further toward skills such as knowing how to ask the right questions, exercising judgment or sowing doubt. In taxation as in financial planning, it’s all about nuance.”2

Risk management

For organizations, AI does not just create “new risks”; it often amplifies existing risk categories. Operational risks (errors, service interruptions), legal and regulatory risks, reputational risks, cybersecurity and confidentiality issues: these can all be affected by the introduction of AI tools, especially if the tools are poorly understood or inadequately overseen.

In practice, AI is already being used in very concrete ways: helping to write documents, sorting and classifying files, supporting scenario analysis and generating reports or personalized communications. Each of these uses can be assessed using the risk management tools already in place in organizations: risk identification, evaluation, mitigation measures, control and monitoring mechanisms. In other words, it is not a matter of reinventing governance from the ground up, but of integrating these new tools into existing mechanisms.

Client experience

From the client’s perspective, AI often inspires a blend of fascination and concern. Some people see its promise of efficiency and modernity; others fear the dehumanization of the advisory relationship or the covert handling of their data. At the heart of these worries lies a simple question: who is really making the decision and on what basis?

In this regard, one strong idea emerged from the panel discussions: viewing AI as an “intern” rather than a “guru.” Interns can do research, suggest formulations, propose scenarios or structure information, but they do not sign recommendations and they do not make final decisions. Those remain the responsibility of the F.Pl., who exercises professional judgment and takes into account the broader context as well as the client’s emotions and priorities. This way of looking at things, both internally and with clients, helps maintain trust while still allowing you to take advantage of AI’s real capacities.

AI in the service of teaching and trust

In March 2025, the Association de planification fiscale et financière (APFF) conducted a survey on the use of artificial intelligence in professions related to financial and tax planning. One important finding stood out: faced with the rapid development of these technologies, the respondents emphasized the value of human competencies. Empathy, professional judgment, critical analysis and communication are essential for overseeing, interpreting and enhancing the use of AI in professional settings.3

The primacy of human competencies should also play a role in the way financial planners present AI to their clients.

Offer jargon-free explanations

Artificial intelligence often elicits a degree of mistrust in clients, who may feel destabilized by complex and abstract technologies. It is therefore important for financial planners to use simple, instructive language to reassure and inform. One good method is to use accessible metaphors: for example, presenting AI as an “advanced calculator” or “assistant” that prepares options and analyses but does not make final decisions. The goal is to make the tool tangible while emphasizing that the advisor remains the pilot of the relationship.

Create engaging experiences

AI can transform the way we present financial plans: personalized visual scenarios, clear visualizations, one-page summaries. These tools improve understanding and stimulate the client’s decision-making, without trying to impress them artificially.

Reinforce transparency

Finally, trust is built on transparency: explain how and when AI is used, remind the client that the F.Pl. always checks and adjusts the recommendations and point out that listening and emotional intelligence are uniquely human.

Conclusion

Artificial intelligence is opening brand new prospects in financial planning, improving the speed, accuracy and personalization of advice. It automates repetitive tasks, improves risk anticipation and enriches the client experience. But this powerful technological tool must never obscure the fact that professional judgment, responsibility and human input are at the heart of the advisory relationship.

With clear oversight, careful experimentation and collaboration among peers, financial planners can benefit from AI while protecting the true pillars of our profession: diligence, ethics and trust.