A rules-based quantitative approach to global equity markets. Built to compound across cycles, not chase them.
Results on this page are updated continuously as the system trades. Last updated: 24 April 2026.
MFAM’s Quantitative Leveraged ETF Strategy is a fully rules-based trading system combining two complementary quantitative approaches, trend-following and mean-reversion, across a basket of leveraged US and Chinese equity ETFs. Every entry, exit, and position size is driven by mathematical rules. There is no discretionary override.
Built on the concepts taught in our free trading course
The trend-following, mean-reversion, volatility-adjusted-stop and regime-filter frameworks that drive this strategy are the same concepts taught step by step in the MFAM free trading course. If you want to understand how and why this system works, the course walks through the underlying mechanics in plain language. Access the free trading course here.
Table of Contents
- 1 Historical Performance
- 2 How You Access the Strategy
- 3 Discuss the Strategy With an Adviser
- 4 Alpha vs Beta
- 5 Risk-Adjusted Return Profile
- 6 Why Maximum Drawdown Is the Number That Actually Matters
- 7 The Two Engines Work in Different Regimes
- 8 How It Works
- 9 Instrument Selection
- 10 Every Trade Matters
- 11 Is This Just Curve Fitting?
- 12 Current Status
Historical Performance
The strategy has been run against ten years of historical market data covering January 2016 through end-2025. From 2026 onward it is being run forward as a live strategy, with executions accumulating as genuine out-of-sample performance. Over the combined period the equity curve has meaningfully outpaced a passive S&P 500 investment.
The strategy outperformed the benchmark in 8 of 11 calendar years. Outperformance years averaged roughly twice the magnitude of underperformance years, reflecting the system’s asymmetric payoff structure.
Context for this period
Long-run S&P 500 returns sit somewhere in the 7 to 10 per cent range depending on the period measured, averaging around 10 per cent in nominal terms over the past century. That average is made up of decades that look very different from each other, including the 1929-32 crash, the 1968-82 stagflation decade, the 2000-02 dot-com bust and the 2008 global financial crisis.
The 2016 to 2026 window has not looked like those decades. It has instead been one of the strongest ten-year stretches the benchmark has ever had, with a 14.9 per cent annualised return driven by uninterrupted tech-led leadership, fast recoveries from the 2020 COVID shock and the 2022 inflation shock, and an artificial-intelligence-driven capex cycle from 2023 onward. The benchmark is running well above its long-run average and sets a high bar.
The strategy outperformed that elevated benchmark by a meaningful margin. Historically, the strategy’s edge has been largest in years with broad market participation, whether those years were strongly positive (2017, 2019, 2025) or modestly positive (2016, 2020). The strategy underperformed only in years dominated by narrow concentration in a handful of mega-capitalisation names, a pattern seen in 2021 and again in the AI-led 2023 and 2024 markets. Those narrow rallies are historically the exception rather than the norm, so in a future decade closer to the long-run 7 to 10 per cent average, the structural edge is expected to be at least as visible as it was against the far stronger backdrop of the past decade.
No guarantee is made about future performance. The point is that the strategy has beaten a very hot benchmark. A cooler benchmark is historically where this type of system has had its best relative years.
How You Access the Strategy
The strategy is delivered as non-discretionary general advice. MFAM generates the signals and, once the investor authorises each one, the MFAM adviser places the order on the investor’s behalf. The mechanics below are how that works in practice.
Your account, your custody
The investor opens their own account at Interactive Brokers. The account and all cash deposits are legally held by the investor, with client cash sitting in Interactive Brokers’ segregated trust account under Australian and United States client-money rules. MFAM does not hold, pool, or control investor funds at any point. The account is yours, the money is yours, and you can close the account or withdraw at any time without MFAM’s involvement. The MFAM adviser is added to the account as an authorised adviser with trading authority only, so orders can be placed on the investor’s behalf once authorised, but cash cannot be moved out of the account by MFAM.
Signal delivery
General advice trade signals are issued to the investor by email and SMS during the trading day as the rules fire. Each signal is a specific instruction: the instrument, the side, and the size expressed as a percentage of the portfolio. The investor only needs to reply yes to authorise. No action from the investor means no trade.
Execution
Once the investor has authorised a signal, the MFAM adviser places the order on the Interactive Brokers account. Orders are typically placed as market-on-open for the next United States session, which executes overnight Australian time. This matches how the strategy has been modelled, so live execution tracks the backtest rather than drifting away from it.
Non-discretionary by design
Every trade requires explicit authorisation from the investor before it is placed. The MFAM adviser does not have discretion to enter or exit positions without a yes from the investor. This is general advice with client-authorised execution, not a managed account and not a managed fund. The investor decides whether any given signal is actioned, and can decline a signal or exit a position at any time.
Leverage is a client choice
The six instruments used are three-times leveraged ETFs, which gives the strategy its headline return profile. The investor can size their exposure to match the risk they want to carry. Fifteen per cent of the portfolio per position is the recommended setting and is what the performance figures on this page are based on. Lower settings are available for investors who want a more conservative profile.
Minimum investment
The strategy is offered from a minimum investment of AUD 50,000. This floor exists so that the costs of running the strategy, including MFAM’s advisory fees for signal generation and adviser-executed trading, remain a proportionate share of portfolio size rather than a meaningful drag on returns. Fee specifics are discussed during the onboarding conversation with an MFAM adviser.
Discuss the Strategy With an Adviser
To step through the mechanics, the paperwork to open the Interactive Brokers account, and whether the strategy is a fit for your portfolio, book a callback with an MFAM adviser.
Prefer to learn the fundamentals first? Access the free trading course.
Alpha vs Beta
The first number most investors look at is return. A 24.1 per cent annualised strategy looks better than a 14.9 per cent benchmark. But the raw comparison hides something important: not all returns are created equal. A strategy’s return number has to be broken down into two very different components before it can be judged honestly.
Beta is market exposure
Beta is the portion of a strategy’s return that comes from simply being in the market. If the S&P 500 goes up 10 per cent in a year, a fully invested portfolio that moves in lockstep with it will also go up roughly 10 per cent. Nothing clever happened. The market rose, and the investor was along for the ride. Anyone willing to press a single buy button on an index ETF can collect beta. It requires no analysis, no timing, and no discipline.
Critically, beta is not free. It comes with full participation in market losses. The same passive portfolio that captured the upside will sit through the full drawdown when the market falls. The investor has no defence against a bear market. They accept whatever path the market delivers, peaks and troughs alike.
Alpha is return that does not come from market exposure
Alpha is what is left over after the beta portion has been accounted for. It is the portion of return that reflects a genuine edge, whether from timing, selection, risk management, or all three. Alpha is what separates an active strategy from a passive one. When a strategy delivers return that cannot be explained by market exposure alone, and delivers it with risk characteristics that are measurably different from a passive market allocation, that is alpha.
The distinction matters because alpha and beta are valued very differently. Beta is effectively free, available for a fraction of a per cent in management fees through any index ETF. Alpha is scarce, because it requires a source of edge that most market participants do not have. Decades of academic and industry research confirm that consistent alpha is rare, and strategies that can demonstrate it are treated as materially different from strategies that simply ride the market higher.
How this strategy generates alpha
This strategy’s outperformance versus the S&P 500 is not a function of taking more market risk. The opposite is true. Over the 2016 to 2026 window it produced a 24.1 per cent annualised return against 14.9 per cent for the S&P 500, while simultaneously containing its maximum drawdown to 27.8 per cent versus 33.7 per cent for the benchmark. Higher return and lower drawdown at the same time. That combination cannot be produced through beta alone. More market exposure would have produced deeper drawdowns, not shallower ones.
The strategy uses rules to pull capital out of the market when conditions no longer support its trading signals, and to deploy capital more aggressively when conditions do. A passive buy-and-hold investor has no mechanism to do either. They are always fully exposed regardless of whether the environment is favourable. The strategy’s alpha comes from this selective exposure, applied mechanically by the rules rather than discretionarily by a human. The risk-adjusted metrics in the next section are where this alpha becomes visible, and the regime filter that drives the selective exposure is explained further down the page.
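The alpha/beta split can be made concrete with a simple regression of strategy returns on benchmark returns: beta is the regression slope, and alpha is the return left over once the beta portion is removed. The sketch below is generic and uses synthetic daily returns (the seed, drift and volatility figures are illustrative, not MFAM data):

```python
import numpy as np

def alpha_beta(strategy_returns, benchmark_returns, periods=252):
    """OLS decomposition: beta is market exposure, alpha is what is left over."""
    s = np.asarray(strategy_returns, dtype=float)
    b = np.asarray(benchmark_returns, dtype=float)
    cov = np.cov(s, b)
    beta = cov[0, 1] / cov[1, 1]              # slope of strategy on benchmark
    daily_alpha = s.mean() - beta * b.mean()  # return not explained by exposure
    return beta, daily_alpha * periods        # annualised alpha

# Synthetic illustration: a strategy carrying 0.7x market exposure plus an edge
rng = np.random.default_rng(0)
bench = rng.normal(0.0006, 0.010, 2520)                  # ~10 years of daily returns
strat = 0.7 * bench + rng.normal(0.0004, 0.008, 2520)    # beta 0.7 + independent alpha
beta, ann_alpha = alpha_beta(strat, bench)
```

The regression recovers the embedded exposure (beta near 0.7) and attributes the remainder of the return to alpha, which is exactly the honest decomposition described above.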
Risk-Adjusted Return Profile
Raw return alone understates the quality of the strategy. Risk-adjusted metrics are where the design discipline becomes visible, and where the alpha described above can actually be measured.
On the measures that matter for a leveraged-ETF system, the strategy materially exceeds the S&P 500. Sortino of 1.41 versus 1.05 shows the strategy generates more return per unit of downside risk. Calmar of 0.87 versus 0.44 indicates superior return relative to the worst historical drawdown. Maximum drawdown was -27.8% versus -33.7%, despite the strategy using three-times leveraged instruments.
Sharpe ratio is included for completeness. A strategy built on three-times leveraged ETFs produces larger positive moves by design, and Sharpe penalises those moves even though they are exactly the days the strategy exists to capture. Even so, the combined portfolio’s Sharpe of 1.09 materially exceeds the S&P 500’s 0.87 over the same period. Sortino, which only penalises downside volatility, and Calmar, which measures return per unit of worst-case drawdown, are the more appropriate risk-adjusted lenses for this type of system, and both tell a clear story in the strategy’s favour.
Each of these ratios is expressing the same underlying point. The strategy produced more return per unit of risk taken than a passive S&P 500 allocation did. That is the mathematical signature of alpha. The mechanism that produces it, an intentional and mechanical reduction in market exposure during adverse regimes, is covered in the regime filter section below.
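All three ratios are straightforward to compute from a daily return series. This is a minimal sketch using the conventional definitions with the risk-free rate assumed zero, not MFAM’s exact methodology:

```python
import numpy as np

def risk_ratios(daily_returns, periods=252):
    """Sharpe, Sortino and Calmar ratios from daily returns (rf assumed 0)."""
    r = np.asarray(daily_returns, dtype=float)
    ann_return = r.mean() * periods
    sharpe = ann_return / (r.std() * np.sqrt(periods))
    # Downside deviation penalises only negative observations
    downside = np.sqrt(np.mean(np.minimum(r, 0.0) ** 2))
    sortino = ann_return / (downside * np.sqrt(periods))
    # Calmar: annualised return per unit of worst peak-to-trough drawdown
    equity = np.cumprod(1.0 + r)
    max_dd = np.max(1.0 - equity / np.maximum.accumulate(equity))
    calmar = ann_return / max_dd
    return sharpe, sortino, calmar
```

Because the downside deviation ignores upside volatility, a strategy with large positive outliers scores better on Sortino than on Sharpe, which is exactly the asymmetry discussed above.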
Why Maximum Drawdown Is the Number That Actually Matters
Most investors focus on return. Experienced investors focus on drawdown, because the arithmetic of recovery is unforgiving. A loss and a subsequent gain of the same percentage do not cancel out. The deeper the drawdown, the more disproportionate the recovery required.
A 50% drawdown requires a 100% gain to break even, which at historical equity market return rates takes roughly seven years. A 75% drawdown requires the remaining capital to quadruple, a 300% gain, and realistically may never be recovered within an investor’s remaining time horizon. Drawdown is not just a number on a chart. It is a direct tax on future compounding, and in severe cases it ends the compounding journey entirely.
The gap between the strategy’s -27.8% maximum drawdown and the S&P 500’s -33.7% may look modest at first glance, but the recovery arithmetic tells a different story. Recovering -27.8% requires a gain of approximately 39%. Recovering -33.7% requires approximately 51%. That 12 percentage point gap in recovery burden translates directly into years of lost compounding for the benchmark investor, particularly if a second drawdown arrives before the first has been worked off.
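The recovery arithmetic follows from a single formula: a fractional drawdown d requires a gain of d / (1 − d) to return to the prior peak. A minimal sketch reproducing the figures above:

```python
def recovery_gain(drawdown):
    """Gain required to recover a fractional drawdown: g = d / (1 - d)."""
    return drawdown / (1.0 - drawdown)

print(round(recovery_gain(0.278), 3))  # strategy max drawdown -> 0.385 (~39% gain)
print(round(recovery_gain(0.337), 3))  # benchmark max drawdown -> 0.508 (~51% gain)
print(round(recovery_gain(0.50), 3))   # a 50% drawdown needs a 100% gain
```

The non-linearity is the point: each extra percentage point of drawdown demands a disproportionately larger recovery gain.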
This is also why any strategy quoting strong headline returns should be scrutinised for the path it took to get there. Large returns achieved through large drawdowns are statistically fragile. The strategy here is engineered to contain the left tail first and let returns compound as a consequence, rather than the other way around.
Plotted continuously, the strategy’s drawdown profile sits above the S&P 500’s through most of the period. The worst trough was -27.8% versus -33.7% for the benchmark, and recovery back to new highs tends to happen faster. The 2020 COVID shock and the 2022 bear market are both visibly shallower and shorter for the strategy, which is the regime filter cutting exposure during adverse periods rather than riding the full decline.
Historical Drawdown Events in Context
A feature of a leveraged-ETF strategy worth understanding is that its largest drawdowns tend to follow its largest rallies. The three-times leveraged instruments produce outsized spikes during favourable regimes, and any subsequent consolidation is measured against that spike. The headline drawdown number reflects how much of a prior rally was given back, not how much base capital was put at risk.
A second, structural driver of these events is the trend engine’s own design. Trend-following systems are built to capture the bulk of a sustained move, not to exit at the top. The exit signal requires confirmation that the trend has weakened, which by construction occurs after the peak rather than at it. That confirmation requirement is precisely what allows the system to ride extended trends without being shaken out by mid-trend pullbacks. The cost of that patience is that a portion of the final leg of every trend is given back before the exit fires. This give-back is not the strategy failing. It is the engine behaving exactly as designed, accepting a few percentage points of peak-to-exit slippage in exchange for the ability to hold winning positions through long, profitable runs. A trend system optimised to avoid give-back would also exit much earlier in its winners, sacrificing the tail of outlier returns that the long-tail distribution depends on.
Across 2016 through 2026 the strategy experienced five drawdowns of 20% or more. What the chart below shows is that each successive trough sat above the previous one. Every time the market pushed the strategy into a significant drawdown, the recovery re-established a new floor higher than the last. That pattern of continuously higher lows is the visual signature of a durable uptrend, not a flat or deteriorating equity curve that happens to have good years mixed in.
The pattern repeats at each event. A sustained rally produces a new all-time high, the underlying ETFs reach a regime inflection, and the strategy gives back a portion of the unrealised gain before the regime filter or volatility-adjusted stops cut exposure. Each drawdown is substantial in peak-to-trough terms, but the trough itself lands above the trough of the prior event. Read together, the five troughs form a staircase, not a sawtooth.
- Early 2018 through January 2019 (-23.5% peak to trough). The 2018 “Volmageddon” episode and the Q4 2018 equity sell-off. The trend engine’s regime filter pulled it to cash during the worst of the late-2018 drawdown. This is the first event in the historical window and sets the baseline for the higher-lows sequence that follows.
- Q1 2020 COVID crash (-27.8% peak to trough). An extremely rapid sell-off across equities. Volatility-adjusted stops exited losing positions as realised volatility expanded. This was the deepest drawdown in the entire 2016-2026 window. At the April 2020 trough the curve sat approximately 27% above the January 2019 trough.
- 2022 bear market (-20.3% over nine months). A sustained downtrend through the 2022 rate-hike cycle. The trend engine remained in cash for most of 2022 and the mean-reversion engine carried the book alone. At the October 2022 trough the curve was approximately 93% above the April 2020 trough.
- Q3 2023 yield-spike drawdown (-21.3%). A backup in long-duration yields pressured the leveraged equity ETFs. At the October 2023 trough the curve was approximately 29% above the October 2022 trough.
- Q4 2025 to Q1 2026 (-25.7%). A two-stage sell-off driven by escalating Iran and Middle East conflict, renewed tariff and policy uncertainty, and a sharp momentum unwind through February and March 2026 where the S&P 500 fell 8.9% in four weeks and the three-times-leveraged underliers fell 33 to 43% peak to trough. The strategy added long exposure into the February rally and was stopped out across both engines during the March waterfall. At the March 2026 trough the curve was approximately 50% above the October 2023 trough.
The honest caveat
The higher-lows pattern rewards long-term holders whose cost base sits at an earlier, lower net liquidation value (NLV). An investor whose capital enters right at a peak experiences the full peak-to-trough drawdown, because their starting NLV is that peak. The cushion visible in the staircase of troughs is a product of compounding across multiple cycles, not of how the strategy behaves in any individual event. Anyone deploying capital into the strategy should size their exposure against the ability to tolerate a 28-30% drawdown from the day they enter, not against the pattern of historical troughs.
The Two Engines Work in Different Regimes
The drawdown containment above is not an accident. It is produced by a specific architectural feature of the strategy: the two engines have uncorrelated exposure patterns. When trend-following conditions are poor, the regime filter pulls the trend engine into cash. When mean-reversion conditions are strong, that engine fills the gap. In practice, leadership switches back and forth depending on what the market is doing.
The 2022 bear market is the clearest illustration. The trend engine was held in cash for almost the entire year, holding positions on just 5% of trading days. The mean-reversion engine stayed active, holding positions on 85% of trading days and capturing the relief rallies that defined the 2022 bear. In trending bull markets such as 2017 and 2025, both engines run concurrently at high exposure.
The mechanical consequence of one engine moving to cash is that total portfolio exposure roughly halves. When the trend engine sits out, only the mean-reversion engine carries risk, and effective gross exposure drops to around 50% of a fully engaged book. This is the structural reason the strategy’s maximum drawdown over the historical period was -27.8% versus -33.7% for the S&P 500, despite trading 3x leveraged instruments. The regime filter is not a hedge; it simply removes capital from the market when the environment does not support the trend signal. Drawdown containment, and the alpha that comes with it, follows as a side effect of that discipline.
Why this matters
Running only a trend strategy exposes an investor to years-long periods of poor performance when markets stop trending. Running only a mean-reversion strategy leaves alpha on the table during extended bull runs. Combining the two gives the strategy something to do in every regime, which is why the combined results are meaningfully smoother than either engine alone.
How It Works
The strategy runs two engines in parallel, each trading on different signals across different instruments. The two engines are designed to work in different market regimes so that the strategy as a whole has exposure to both trending and mean-reverting environments.
Trend Engine
Identifies established uptrends using a combination of momentum indicators, moving-average filters, and a regime-detection layer. The engine stays out of the market when broader conditions do not support trend-following. Exits are driven by volatility-adjusted trailing stops that tighten as a trade moves in favour. Position sizing is fixed per instrument.
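One standard way to implement a volatility-adjusted trailing stop of this kind is an ATR-based ratchet, where the stop trails the highest close by a multiple of recent average true range and only ever tightens. This is a generic sketch of the technique, not MFAM’s proprietary implementation; the window and multiple shown are illustrative:

```python
import numpy as np

def average_true_range(high, low, close, window=14):
    """ATR: rolling mean of the true range, a standard gauge of price noise."""
    prev_close = np.concatenate(([close[0]], close[:-1]))
    true_range = np.maximum.reduce([high - low,
                                    np.abs(high - prev_close),
                                    np.abs(low - prev_close)])
    return np.convolve(true_range, np.ones(window) / window, mode="valid")

def trailing_stop(close, atr, multiple=3.0):
    """Stop trails the highest close by a multiple of ATR and never loosens."""
    recent_close = close[-len(atr):]           # align closes with the ATR series
    highest = np.maximum.accumulate(recent_close)
    return np.maximum.accumulate(highest - multiple * atr)  # ratchet upward only
```

The ratchet (the outer np.maximum.accumulate) is what makes the stop tighten as a trade moves in favour: new highs pull the stop up, but a later volatility spike cannot push it back down.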
Mean-Reversion Engine
Identifies short-term oversold conditions in volatile sectors and takes positions sized to capture rebounds. Exits again use volatility-adjusted stops, with discipline around capturing the first material reversion move rather than holding for extended trends.
Two Engines, Two Trade Profiles
The two engines produce very different trade shapes. The trend engine is designed to catch extended directional moves and hold them through a sustained uptrend, which typically means multi-week to multi-month hold times. The mean-reversion engine is the opposite, entering after a short sharp dislocation and exiting as soon as price snaps back, often within a few days. The chart below shows one representative trade from each engine, drawn from the historical record, with entry and exit points marked.
The examples are illustrative, not typical. Every trade behaves differently, and the strategy’s performance is driven by the distribution of outcomes across hundreds of trades rather than any single winner.
Proprietary Signal Calibration
Both engines use a proprietary approach to signal tuning, allowing faster regime detection than conventional moving-average or crossover methodologies. This responsiveness is a core part of the edge and comes from extensive quantitative research rather than any single technical indicator.
For a plain-language walkthrough of the trend, mean-reversion, regime-filter and volatility-stop concepts this strategy is built on, access the free MFAM trading course.
Instrument Selection
The strategy trades six leveraged exchange-traded funds, three on each engine. Instrument selection is deliberate and follows three principles. The first is that each instrument must sit in a market segment whose behaviour matches the engine trading it. Trend legs are placed in markets with a history of long, persistent directional moves. Mean-reversion legs are placed in sectors where sharp dislocations reliably snap back. The second principle is non-overlapping exposure. Each of the six legs tracks a distinct sector or geography, so an adverse event in one does not cascade across the book. The third is liquidity. Every instrument is a top-tier ETF with deep order books, so fills are reliable and position sizing is not constrained.
Leverage is used intentionally. The three-times-leveraged structure amplifies every signal, allowing the system to extract meaningful return from moves that would be uneconomic to trade in unleveraged form. The same leverage is what makes stop-loss discipline and regime filtering non-negotiable, and the strategy is engineered around that requirement.
Trend Engine Instruments
US large-cap technology has produced some of the longest and cleanest trends of any major equity index over the past decade. The constituents are dominated by globally scaled businesses whose earnings trajectories unfold over quarters and years, not days, which gives a trend engine meaningful runway once a move is established.
The semiconductor cycle is one of the most structurally trending sectors in global equities. It combines a multi-year secular growth layer, driven by compute demand, artificial intelligence and industrial electrification, with a cyclical inventory cycle that creates sustained directional moves in both directions. Trend-following captures both.
Chinese equities move on a different set of drivers than US markets, including domestic policy cycles, stimulus rounds and property-sector dynamics. This non-US exposure gives the trend engine an independent opportunity set. When US markets stall, Chinese markets may be trending separately, and the system can capture that without taking additional US risk.
Mean-Reversion Engine Instruments
Defense names move sharply on geopolitical headlines, defense-budget cycles and programme-specific news. The underlying thesis, sustained government spending, is structurally stable, so short-term dislocations caused by headlines tend to reverse quickly. That pattern is a textbook setup for a mean-reversion engine.
Financials are highly sensitive to rates and yield-curve moves, which means they overreact to Fed decisions, inflation prints and individual-bank news. The sector’s overall capitalisation is anchored by large-balance-sheet businesses that do not fundamentally change on a single data release, so overreactions are regularly faded by buyers, producing the rebound the engine is designed to capture.
Healthcare carries two useful mean-reversion features. It is defensive enough to avoid broad-market sell-offs, but contains sub-sectors, biotech and pharma in particular, that produce sharp idiosyncratic moves on trial results and regulatory news. Those moves dislocate the broader sector without changing its long-term trajectory, creating clean reversion setups.
Every Trade Matters
Quantitative strategies rely on statistical discipline. The distribution of individual trade outcomes over the backtest period demonstrates why taking every signal, without filtering, is a structural requirement.
Trade outcomes follow a long-tailed distribution. Most trades produce small positive or small negative returns, clustered near the mean. A small minority, roughly 3% of the total trade count, produced returns in excess of 40%.
Equally important is what happens on the left-hand side of the distribution. The worst single trade over the historical period lost around 33%, and the ten worst losses all clustered between 26% and 33%. There is no fat left tail, no single catastrophic drawdown, no blow-up trade that unwound prior gains. The left side of the distribution is capped, and that cap is deliberate.
Two risk controls produce this shape. The first is stop-loss discipline. Every position carries a volatility-adjusted stop that tightens as a trade moves in favour, so losing trades are exited mechanically at a pre-defined loss rather than allowed to deteriorate. The second is trend-strength fading, where the trend engine progressively reduces exposure as the underlying trend signal weakens. Rather than waiting for a hard reversal, capital is pulled out of the market as conviction fades, which removes positions before adverse moves compound into outsized losses. Combined, these two controls are what allow the strategy to run leveraged instruments without carrying catastrophic single-trade risk, and are the reason the left tail of the distribution stays short.
The concentration of profit in outlier trades
Despite being only 3% of the trade count, the top 10 trades contributed approximately 75% of total strategy profit over the historical period. The top 5 trades alone contributed roughly 45%. Missing even a handful of these large-tail trades would have materially reduced end performance. This is why the strategy takes every signal without discretionary filtering. Post-hoc selection, skipping trades that look low-conviction, would destroy the statistical edge the system depends on.
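The concentration statistic is easy to compute from a list of per-trade profits. The sketch below uses made-up numbers to show the mechanics; the figures are not MFAM trade data:

```python
import numpy as np

def top_n_profit_share(trade_pnls, n=10):
    """Fraction of total net profit contributed by the n most profitable trades."""
    pnl = np.sort(np.asarray(trade_pnls, dtype=float))[::-1]  # largest first
    return pnl[:n].sum() / pnl.sum()

# Hypothetical long-tailed book: a few outliers, many small outcomes
trades = [50.0] * 5 + [2.0] * 150 + [-1.5] * 145
share = top_n_profit_share(trades, n=5)  # ~0.75 of net profit from 5 trades
```

In a distribution shaped like this, skipping even a couple of the outlier trades collapses the total, which is the statistical argument for taking every signal.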
Is This Just Curve Fitting?
A fair question to ask of any backtested strategy is whether the performance is real or whether the rules have been retro-fitted to the data until the equity curve looks good. This is called curve fitting, and it is the single most common failure mode of quantitative research. A curve-fit strategy will show a beautiful historical track record and then fall apart the moment it is asked to trade on data it has not seen, because its numbers were engineered into existence rather than discovered.
Curve fitting is easy to do accidentally. Any strategy has parameters, and any parameter can be tuned until the historical result is maximised. If you try enough combinations on the same dataset, something will fit that dataset almost perfectly by coincidence alone. The fit says nothing about the future because the rules were selected for that specific history, not for any underlying market behaviour. A properly engineered quantitative strategy has to be built in a way that makes this kind of overfitting structurally difficult.
How to tell curve fitting from a real edge
The cleanest test is a parameter sweep. Take the finished strategy, vary its core parameters over a wide range, and plot every resulting equity curve. If the strategy’s performance collapses when the parameters move even slightly, the historical result was a lucky coincidence at one specific parameter setting. If the curves instead form a tight cluster, with every variant producing similar shape and similar ending value, the edge lives in the underlying rules, not in the specific numbers chosen.
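A parameter sweep of this kind is mechanically simple. The sketch below runs a toy moving-average trend rule (deliberately not the actual MFAM strategy) across a wide range of lookback values on synthetic prices, collecting one equity curve per setting; in a robust system the resulting ending values cluster rather than spike at a single parameter:

```python
import numpy as np

def sma_trend_backtest(prices, lookback):
    """Toy trend rule: hold the asset while price is above its moving average."""
    prices = np.asarray(prices, dtype=float)
    sma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    aligned = prices[lookback - 1:]                       # prices aligned with SMA
    returns = np.diff(aligned) / aligned[:-1]
    in_market = (aligned[:-1] > sma[:-1]).astype(float)   # signal from prior bar
    return np.cumprod(1.0 + in_market * returns)          # equity curve

# Sweep the lookback over a wide range on a synthetic upward-drifting series
rng = np.random.default_rng(2)
prices = 100.0 * np.cumprod(1.0 + rng.normal(0.0005, 0.010, 2000))
endings = {lb: sma_trend_backtest(prices, lb)[-1] for lb in range(20, 201, 20)}
```

Plotting every curve in `endings` side by side is the visual version of the test: a tight band suggests the rule, not the number, carries the edge.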
A well-built strategy has to be robust on both sides of every trade: the entry and the exit. Sweeping only one side leaves the other side unexamined. Both sides are tested independently below.
Exit-side robustness, the volatility-stop sweep
The exit side of the strategy is governed by volatility-adjusted stops. Each position is given a stop whose distance scales with how noisy the underlying instrument currently is, so a calm market produces tight stops and a noisy market produces wider ones. The question is whether the edge depends on the specific stop multiples chosen, or whether it holds across a range of reasonable values.
Every one of the twenty-five exit-parameter combinations beats the S&P 500 by a meaningful margin, and every curve follows the same overall shape through every drawdown and recovery. The cluster is tight, with ending values sitting in a narrow band relative to the total compounded return. There is no single knife-edge volatility-stop setting propping up the result. The exit logic works across the full plausible parameter space.
Entry-side robustness, the signal-length sweep
The entry side of the strategy is governed by two timing signals, one for the trend engine and one for the mean-reversion engine. Each signal has a lookback window that controls how sensitive it is. A shorter window produces more entries and more noise. A longer window produces fewer entries but waits for more confirmation. The second sweep varies both lookback windows independently to test whether the edge survives different signal sensitivities.
Again, every combination beats the benchmark by a meaningful margin and every curve retains the same overall path. The entry logic is not hanging on one specific lookback value. Whether the lookback windows are shorter or longer than the deployed configuration, the strategy still produces a similarly shaped equity curve ending materially above the S&P 500. Entry and exit are independently robust, which is the stronger test because it rules out the possibility that one side of the system is carrying the other.
Plateau selection, why the deployed setting is not the peak
A common and valid question when looking at a parameter grid is whether the deployed configuration was simply picked as the highest-performing cell. If the development process had done that, the deployed setting would be sitting at the top of a narrow spike, and any small move away from it would collapse performance. That is the curve-fitting failure mode a sweep is designed to detect.
The selection method used here was different. The grid was inspected across multiple planes, with return stability and drawdown stability weighted equally in the decision. The goal was not to find the single best cell on any one metric. It was to find a cell whose neighbours produced similar returns and similar drawdowns, so that a small parameter miscalibration in live trading would not change the character of the results. Both planes were considered together, and the deployed setting was chosen from the region where the neighbourhood was consistent on both.
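The selection idea can be sketched as follows: score each cell by the spread of its 3-by-3 neighbourhood on both the return plane and the drawdown plane, and pick the most stable region rather than the single highest cell. This is an illustrative reconstruction of the described approach, not the exact procedure used; `pick_plateau` and the spread metric are assumptions.

```python
def neighbourhood(grid, i, j):
    """3x3 block of cells around (i, j), clipped at the grid edge."""
    return [grid[a][b]
            for a in range(max(0, i - 1), min(len(grid), i + 2))
            for b in range(max(0, j - 1), min(len(grid[0]), j + 2))]

def spread(values):
    return max(values) - min(values)

def pick_plateau(return_grid, dd_grid):
    """Choose the cell whose neighbourhood is most stable on BOTH
    planes, rather than the best-return cell (which may be a spike)."""
    best, best_score = None, float("inf")
    for i in range(len(return_grid)):
        for j in range(len(return_grid[0])):
            score = (spread(neighbourhood(return_grid, i, j))
                     + spread(neighbourhood(dd_grid, i, j)))
            if score < best_score:
                best, best_score = (i, j), score
    return best
```

On a toy grid with one spiked high-return cell, this rule deliberately avoids the spike, because its neighbourhood spread is large.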
The heatmap below shows the profit factor plane of the same 5 by 5 entry-parameter grid, with the actual parameter values anonymised as x and y. Profit factor is the ratio of gross profits to gross losses, so any cell above 1.0 is net profitable. It is the cleanest way to visualise whether the edge survives away from the deployed configuration.
The key feature is that every cell in the neighbourhood is green. Profit factor ranges from roughly 1.45 to 1.59 across the entire grid. A curve-fit strategy would show one green cell surrounded by red, with profitability collapsing as soon as any parameter moved. Instead, the whole neighbourhood is comfortably profitable. The edge is not sitting on a knife-edge parameter choice; it is generated by the underlying rules and survives in every direction around the deployed setting.
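Profit factor itself is straightforward to compute from a list of closed-trade P&Ls, as the definition above states:

```python
def profit_factor(trade_pnls):
    """Gross profits divided by gross losses.
    Any value above 1.0 means the trade set is net profitable."""
    gains = sum(p for p in trade_pnls if p > 0)
    losses = -sum(p for p in trade_pnls if p < 0)
    if losses == 0:
        return float("inf") if gains > 0 else 0.0
    return gains / losses
```

For example, trades of +300, -100, +150, -200 and +50 give gross profits of 500 against gross losses of 300, a profit factor of about 1.67.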
The same broad stability shows up on the max-drawdown plane when it is inspected alongside this one. Drawdown is not uniform across the whole grid, but it clusters tightly in the region the deployed configuration was picked from, which is why both planes were weighed together at selection time rather than either one in isolation.
How the strategy was developed
The development process itself was designed to resist curve fitting, in three deliberate stages.
Stage one was building the base logic on unleveraged instruments. The trend and mean-reversion engines were designed and tested against one-times ETFs tracking the same underlying exposures, where signal behaviour is cleaner and leverage-related decay does not distort the data. The goal in this stage was to establish whether the underlying trading rules captured real market behaviour, independent of any amplification. If the rules did not work on the unleveraged instruments, they were not going to work anywhere.
Stage two was validation on data the rules had never seen, using the 2022 to 2025 window as the held-out period. The engines were built and their parameters chosen on the 2016 to 2021 window. Once the base logic performed on that development set, it was run forward against the 2022 to 2025 window, which had been deliberately held out of the build. If the rules had been curve-fit to the development data, the out-of-sample performance would have fallen apart. It did not, which is what gave confidence that the edge was structural rather than accidental. This is the split shown by the dotted lines on the main equity curve earlier on the page.
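The held-out validation described above amounts to choosing parameters on the development window only, then reading off their out-of-sample result once, with the parameter frozen. A toy sketch; the per-parameter return dictionaries are invented purely for illustration:

```python
def best_param(dev_returns_by_param):
    """Pick the parameter using development-window results ONLY."""
    return max(dev_returns_by_param, key=dev_returns_by_param.get)

def held_out_check(dev_returns_by_param, oos_returns_by_param):
    """Freeze the parameter chosen on the dev window, then report its
    out-of-sample result. The oos figures play no part in selection."""
    p = best_param(dev_returns_by_param)
    return p, oos_returns_by_param[p]
```

If the edge were curve-fit, the frozen parameter's out-of-sample figure would collapse relative to its development figure; a structural edge degrades far less.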
Stage three was calibrating for the three-times leveraged instruments actually traded live. Leverage changes the risk profile materially. During the 2022 to 2025 held-out window, the volatility-stop multiples and regime-filter thresholds were re-tuned to accommodate the faster, larger moves that three-times ETFs produce. This calibration did not change the underlying logic. It adjusted the risk-taking envelope so the same rules would operate safely on instruments that amplify every move by a factor of three. The rules themselves were not refit to the 2022-2025 data, only the risk-sizing parameters were recalibrated for leverage.
This staged process is why both parameter sweeps cluster tightly rather than collapsing when the parameters are varied. The underlying logic was validated before leverage-specific tuning was applied, so the strategy is not reliant on any specific choice of entry length or stop multiplier for its edge.
What this does not prove
A clean parameter sweep and a disciplined development process do not guarantee future performance. What they do is rule out the most common reason backtested strategies fail in live trading, which is that the rules were fit to the history rather than discovered from it. The edge here rests on the behaviour of the instruments and the market regimes traded. If those behaviours change materially, the strategy will change with them. What the sweep shows is that within the historical window, the edge was not an artefact of parameter choice.
Current Status
Performance figures on this page are a combination of two periods. The 2016 to 2025 portion is historical simulation constructed from actual market data using the strategy’s current rules. The 2026 portion onward is the strategy running forward as a live system, with executions accumulating as genuine out-of-sample performance. Both segments are shown continuously on the same equity curve, with a marker indicating the boundary between the historical simulation and the live forward period.
General Advice Warning
The information on this page is general in nature and does not take into account your individual objectives, financial situation, or needs. Before acting on any information presented here, you should consider its appropriateness having regard to your own circumstances, and where relevant consider the Product Disclosure Statement for any financial product referred to.
Backtested Performance Disclosure
All performance data presented on this page is backtested and hypothetical. Backtested results are constructed with the benefit of hindsight using historical market data. They do not reflect the friction of actual trading, including but not limited to execution slippage, capacity constraints, fill timing, adverse selection, changes in market structure, or changes in the behaviour of underlying instruments over time. Backtested performance is not an indicator of what actual trading would have produced, and should not be interpreted as a forecast of future performance.
Portfolio Construction
The backtest is run as a single shared pool across both engines, starting at USD 200,000. Each of the six legs is sized at 15% of the combined portfolio NLV at the time a signal fires, meaning gross exposure tops out at approximately 90% when every leg is concurrently engaged. In live deployment the two engines run in separate Interactive Brokers accounts that are periodically rebalanced to 50/50 so the position-sizing base is preserved.
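The sizing rule as stated can be sketched directly: each leg targets 15% of combined NLV at signal time, so gross exposure caps near 90% with all six legs engaged. The nearest-whole-share rounding convention below is an assumption, not a documented detail:

```python
def leg_notional(nlv, weight=0.15):
    """Dollar target for one leg: 15% of combined portfolio NLV."""
    return weight * nlv

def leg_shares(nlv, price, weight=0.15):
    """Shares for one leg, rounded to the nearest whole share
    (rounding convention assumed for illustration)."""
    return round(weight * nlv / price)

def gross_exposure(active_legs, weight=0.15):
    """Fraction of NLV deployed with this many legs concurrently on."""
    return active_legs * weight
```

At the USD 200,000 starting pool, one leg targets USD 30,000, and six concurrent legs put gross exposure at 90% of NLV.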
Commissions and Fees
Returns shown on this page are net of Interactive Brokers base commissions at the published IBKR Pro Fixed schedule for US equities (USD 0.005 per share, minimum USD 1 per order, capped at 1% of trade value). They do not include MFAM adviser fees, platform fees, spreads, borrow, or any tax. Actual client outcomes will be lower than the gross-of-adviser-fee figures shown here.
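The stated schedule is easy to express in code: USD 0.005 per share, floored at USD 1 per order and capped at 1% of trade value. A sketch of that calculation as described above:

```python
def ibkr_fixed_commission(shares, price):
    """Per-order commission under the published schedule:
    USD 0.005/share, minimum USD 1, capped at 1% of trade value."""
    raw = 0.005 * shares
    return min(max(raw, 1.0), 0.01 * shares * price)
```

So 1,000 shares at USD 50 costs USD 5, a 100-share order hits the USD 1 minimum, and on a very low-priced instrument the 1%-of-value cap binds instead.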
Past Performance
Past performance, whether actual or hypothetical, is not a reliable indicator of future performance. Live trading results may differ materially from backtested results.
Leveraged Instruments
The strategy trades leveraged exchange-traded funds. Leveraged ETFs carry materially higher risk than their unleveraged counterparts and can experience significant decay in volatile or sideways markets. They are not suitable for all investors and are generally inappropriate as long-term buy-and-hold investments. The strategy’s rules-based approach seeks to manage this risk, but cannot eliminate it.
No Guarantee
MF & Co. Asset Management makes no representation or guarantee regarding the future performance of the strategy. Returns may be negative. You may lose capital.
About MFAM
MF & Co. Asset Management Pty Ltd holds an Australian Financial Services Licence (AFSL). This page has been prepared for general information purposes. To discuss whether the strategy may be appropriate for your circumstances, speak with an MFAM adviser.
