Earned Autonomy, Not Auto-Pilot: How Advisors Can Trust AI

As artificial intelligence moves from experimentation to daily use across wealth management, the conversation is shifting away from speed and automation toward trust, accountability, and control. Advisors and compliance teams are no longer debating whether AI belongs in the workflow—they’re focused on how it gets implemented without undermining judgment, fiduciary duty, or client confidence.  

Brooke Juniper, CEO of TIFIN Sage, has been working closely with advisors, investment teams, and compliance leaders navigating this exact tension. Her view reflects a growing industry consensus: skepticism toward AI isn’t resistance—it’s maturity, and the firms getting this right are those designing systems where autonomy is earned, not assumed. 

CM: You’ve said skepticism about AI is a sign of maturity, not resistance. What are advisors actually skeptical of when it comes to AI in their day‑to‑day work? 

BJ: Advisors aren’t skeptical of technology for its own sake; they’re skeptical of anything that affects client outcomes without being fully transparent. 

What they want to understand is what’s “under the hood.” Where does the data come from? How are recommendations generated? What guardrails are in place to prevent bias or errors? If an AI tool is influencing portfolio decisions or client communications, advisors need to know they can explain and stand behind it. 

In many ways, they’re evaluating AI the same way they would evaluate a new hire. They want to see judgment, consistency, and reliability before they trust it with meaningful work. 

CM: How do you define “earned autonomy” in the context of advisor‑facing AI, and what does it look like in practice inside a firm? 

BJ: Earned autonomy is the idea that AI shouldn’t be given full decision-making authority on day one. Its role should expand over time as it demonstrates accuracy, reliability, and alignment with firm standards. 

In practice, that progression tends to look like this: first, AI assists (e.g., summarizing meetings or organizing information). Then it recommends (e.g., suggesting actions or highlighting opportunities). From there, it may act with guardrails, such as drafting communications that require approval. Only after sustained trust is built would firms consider allowing limited autonomous actions. 
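The progression Juniper describes can be pictured as a tiered state machine in which promotion is gated on demonstrated performance. The sketch below is purely illustrative: the tier names mirror her description, but the thresholds and metric names are hypothetical, not TIFIN Sage's implementation.

```python
from enum import IntEnum

class AutonomyTier(IntEnum):
    """Illustrative stages of 'earned autonomy' for an advisor-facing AI tool."""
    ASSIST = 1               # summarizes meetings, organizes information
    RECOMMEND = 2            # suggests actions, highlights opportunities
    ACT_WITH_GUARDRAILS = 3  # drafts communications that require approval
    LIMITED_AUTONOMY = 4     # narrow autonomous actions after sustained trust

def next_tier(current: AutonomyTier,
              accuracy: float,
              reviewed_outputs: int,
              min_accuracy: float = 0.98,
              min_reviews: int = 500) -> AutonomyTier:
    """Promote at most one tier at a time, and only when the tool has
    demonstrated accuracy across a sufficient volume of reviewed work.
    Thresholds here are placeholders a firm would set per use case."""
    earned = accuracy >= min_accuracy and reviewed_outputs >= min_reviews
    if earned and current < AutonomyTier.LIMITED_AUTONOMY:
        return AutonomyTier(current + 1)
    return current
```

The design point is that autonomy is never granted by default: a tool that falls short of its thresholds simply stays at its current tier.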

The key is that autonomy is earned through performance and oversight and is not assumed. 

CM: Where are you seeing AI deliver real, repeatable value for advisors right now—and just as importantly, where do you think it shouldn’t be used yet? 

BJ: Right now, AI is most useful when it helps advisors make decisions faster and apply their best thinking more consistently across all clients, without taking control away from the advisor. 

It’s working particularly well in turning insights into client-ready outputs, drafting personalized commentary, summarizing portfolio impacts from market moves, and packaging recommendations in a way that’s easy to deliver in a meeting. It’s also effective in identifying next-best actions at scale, such as flagging which clients are most affected by risks like rates, concentration, style drift, or tax exposure, and surfacing suggested actions for review. 

At the book level, AI can help advisors and home office teams focus on the accounts that matter most, for example, those with the highest risk, the greatest opportunity, or the most misalignment with current guidance. It also improves meeting preparation and follow-through by making prep faster and ensuring notes, follow-ups, and action items don’t fall through the cracks. 

Where it shouldn’t be used without strong controls is in fully autonomous investment decisions that change portfolios without clear approval workflows, constraints, and audit trails. The same applies to compliance-sensitive client communications sent “as-is” without supervision controls or required disclosures. Any system that can’t clearly show what data was used, what guidance was applied, and why a recommendation was made introduces avoidable risk. 

The practical rule is straightforward: AI should do the heavy lifting (analysis, synthesis, prioritization, drafting), while the advisor retains final decision authority, especially where suitability, client context, and fiduciary judgment are involved. 

CM: How do you design AI tools so that ownership of the final decision is unambiguous—and clearly sits with the advisor, not the model? 

BJ: It starts with user experience and workflow design. 

The advisor should always be able to see what the system is doing and why. Recommendations should be explainable. High-impact steps like portfolio adjustments or client-facing messages should require review and approval. Lower-risk administrative tasks can be more automated, but still transparent. 

When AI is embedded into existing systems with clear checkpoints and audit trails, it reinforces that the advisor is in control. The model is supporting the process, not replacing professional judgment. 

CM: From your work with compliance leaders, what are the biggest misconceptions about AI risk, and what design choices move the needle on supervision and record‑keeping? 

BJ: People often think AI risk is just about stopping wrong or inappropriate answers. In reality, compliance teams care just as much about how an answer was created, where the data came from, whether the process was consistent, and whether decisions can be reviewed and supervised. 

One common misconception is that if you block the model from producing problematic outputs, the risk is solved. In practice, supervision requires traceability: understanding the inputs, the steps taken, any approvals involved, and what was ultimately delivered. 

Another misconception is that AI is either allowed or not allowed. A more practical approach is tiered permissioning and earned autonomy, where lower-risk use cases scale first and higher-risk actions require more controls. 

There’s also the perception that AI is inherently an ungovernable black box. With thoughtful design, it can actually be more governable than ad hoc human workflows because it can automatically log activity and enforce consistent guardrails. 

The design choices that make the biggest difference are fairly pragmatic: human-in-the-loop approvals for high-impact steps such as recommendations or client-facing outputs; clear audit trails showing what data was used, which enterprise guidance applied, what the model suggested, and what the advisor edited or approved; policy-based guardrails around product eligibility, disclosures, and channel constraints; role-based access aligned with the firm’s supervision model; and integration into governed systems like CRM and portfolio platforms rather than standalone “shadow” tools. 

Ultimately, the goal isn’t just to prevent mistakes, it’s to make oversight and record-keeping easier and more consistent than they are today. 

CM: Looking ahead three to five years, what would “trusted AI” in wealth management look like to you—and what will separate leaders from fast followers?  

BJ: Trusted AI will look less like a chatbot and more like embedded decisioning inside investment workflows, where AI helps firms scale their best thinking consistently across every advisor and client. 

In practical terms, that means advisors can see what’s under the hood, including the data sources, assumptions, and firm guidance shaping recommendations. It also means AI consistently delivers advisor-ready, compliant outputs: not just dashboards or insights, but actions advisors can confidently use with clients. At the firm level, AI should be governed like any other investment process, with clear controls, supervision, documentation, and ongoing improvement. 

What will separate leaders from fast followers is how seriously they treat trust. Leaders will view trust as a product capability, not a marketing claim, and will build it through guardrails, auditability, and progressive autonomy. They’ll connect AI to specific business outcomes like capacity, consistency, conversion, and risk management, and measure adoption and impact rather than running scattered pilots. And they’ll embed AI where work already happens, within portfolio systems and CRM workflows, so it drives real behavior change instead of adding another layer of tools. 

Fast followers will continue experimenting. Leaders will operationalize AI in a way that earns trust and reliably turns insight into action. 


About Joe Palmisano

Joe Palmisano is Editorial Director for Connect Money, where he brings nearly three decades of experience in market insights as a financial journalist, analyst, and senior portfolio manager for leading financial publications, advisory firms, and hedge funds. In his role as Editorial Director, Joe is responsible for the selection of content and creation of daily business news covering the financial markets, including Alternative Assets, Direct Investment, and Financial Advisory services. Before joining Connect Money, Joe was a financial journalist for the Wall Street Journal, regularly publishing feature stories and trend pieces on the foreign exchange, global fixed income, and equity markets. Joe parlayed his experience as a financial journalist into roles as a Senior Research Analyst and Portfolio Manager, writing daily and weekly market analysis and managing an FX and US equity portfolio. Joe was also a contributing writer for industry magazines and publications, including SFO Magazine and the CMT Association. Joe earned a B.S.B.A. in Finance from The American University. He holds the Chartered Market Technician (CMT) designation and is a member of the CFA Institute.