Artificial General Intelligence needs a world model and Generalized Machine Learning, a novel approach to machine learning.

Machines and the Sisyphean Paradigm

“The Sisyphean Paradigm” refers to a concept inspired by the Greek mythological figure Sisyphus, who was condemned to eternally push a boulder uphill only to have it roll back down before reaching the top. The Sisyphean paradigm is often used metaphorically to describe a situation or task that is repetitive, laborious, and seemingly endless.

In a broader context, the Sisyphean paradigm can represent any activity or endeavour characterized by persistent and futile effort. This is where progress is continuously negated by circumstances or inherent limitations. It suggests a sense of existential struggle, with individuals or societies caught in a cycle of repetitive tasks or challenges that offer no lasting fulfillment or resolution.

The concept has been applied to various domains, including philosophy, literature, psychology, and discussions of human existence. It is often invoked to explore themes of meaninglessness, the human condition, and the nature of perseverance in the face of adversity or seemingly insurmountable obstacles.

9 out of 10 asset managers don’t beat the market over a 10-year period. If discretionary managers cannot do it, can machines do it? The answer is not a linear translation. Machines amplify human biases. And biases are complex. They come and go. There is nothing permanent about biases, and they come in shapes and forms that are not intelligible to the average person. To understand how variability can become undetectable, one can look at HIV, which long evaded detection because of its high genetic variability. In simple words, it changed shape and form. Machines must be trained to understand statistical biases, not human biases, before adopting a meta-strategy platform.

Erroneous factors

All factors work and fail temporally, and if the Size factor can serve as a proxy, every other factor could be a function of something that is intrinsically changing, like information. Historically, society has focused on such erroneous factors because of the comfort of causality and intuitive logic. Refusing to accept that correlation does not imply causation strengthens bias and amplifies the error.

Arbitrage Pricing Theory (APT) (Ross, 1976) holds that substitutes must sell at the same price, using a simple return metric: if two financial assets offer the same financial returns, they can be compared. Ross’s substitute idea remains valid; his systematic factor, however, remains uncertain. Ross says it is driven by unanticipated factors, because what you cannot anticipate is what you need to manage. It’s a circular argument. On one hand, systematic factors cannot be anticipated; on the other hand, sensitivities related to those factors are supposed to assist asset managers and organizations in managing large pools of funds. Ross defined sensitivities as the responses of asset returns to unanticipated movements in economic factors.

“But what are these factors?” If we knew them, we could measure directly the sensitivities of individual stocks to each. Unfortunately, this is much harder than it sounds. To begin with, any one stock is so influenced by idiosyncratic forces that it is very difficult to determine the precise relationship between its return and a given factor. The biggest problem in sensitivities measurement is separating the unanticipated from the anticipated factor movements. We should include both anticipated and unanticipated changes when only the latter is relevant…The market index should not be ignored, but neither should it be worshipped. It is simply helpful as a landmark on the horizon…Recent empirical evidence has shown unequivocally that most commonly used market indices are not optimized portfolios. Under this condition, the CAPM beta is not even a reliable indicator of expected return and, as we have already seen, it is virtually worthless as a measure of the type of risk to which the portfolio is exposed…There are several important factors and if all of them are not represented then our understanding of how the capital market works is inadequate.”

Ross emphasizes the need to understand systematic factors (inflation, industrial production, risk premiums, and interest rates). While he underplays the idiosyncratic factors, he still suggests that “buying the market” is simply wrong. Though APT lost favor among adopters over time, the factor debate continues to this day. The industry is still waging the “my factor is better than your factor” battle while losing the α war. Moreover, APT’s law of one price has a temporal limitation, as markets interact on a multi-duration continuum.

Jacod and Yor (1977) turned ‘financial economics’ into ‘mathematical finance’: all questions are posed in purely mathematical terms, and no economic principle appears without a precise mathematical statement. APT’s assumptions, by contrast, are domain-specific, subjective, and without precise mathematical definitions. Boussinesq (1897) taught Bachelier about heat equations, which Bachelier combined with probability and his knowledge of stock exchanges to create the mathematical theory of speculation, the foundation stone of modern finance as we know it today. A sense of mathematical generality still evades the market as it reels under speculative swings.

APT, CAPM, and the Three-Factor Model deliver performances that vary with the period under test. The models are inconsistent, challenged, and preferred on the basis of more or fewer assumptions or different risk-preference choices. Fama’s unequivocal admission that the Size premium underperforms for extended periods remains unexplained. Factor investing is not a science. Factor failure is a reality. Factor successes are short-lived and temporary. Unanticipated forces beyond APT’s systematic factors can reject or accept a factor. Risk premiums can be explained probabilistically; causal explanations of risk premiums add to the market’s inefficiency. Fama builds his case by challenging SLB, while behavioral finance builds its factor (psychology) case by challenging the EMH. Prescott (2008) wrote that “Raj and I concluded that equity premium had to be for something else…our finding was that non-diversifiable risk accounted for only a small part of the difference in historical average returns”.

Generating statistical significance is easy. 200 to 300 variables typically generate two to three statistically significant factors, even if there is no true underlying relationship. So the more variables in a model, the more likely a few of them will conjure up meaning where there is none. (Alternative Investments: An Allocator’s Approach, Chambers, Kazemi, Black, Wiley, 2021)
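To make this concrete, here is a minimal simulation of the effect; the sample sizes, seed, and 1% significance threshold are assumptions chosen for illustration, not values from the cited text.

```python
# A hedged sketch: regress pure-noise "returns" on pure-noise "factors"
# and count how many look statistically significant by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_factors = 120, 250              # e.g. 10 years of monthly data, 250 candidates

returns = rng.normal(size=n_obs)         # random returns: no real relationship exists
candidates = rng.normal(size=(n_obs, n_factors))

false_positives = 0
for j in range(n_factors):
    result = stats.linregress(candidates[:, j], returns)
    if result.pvalue < 0.01:             # "significant" at the 1% level
        false_positives += 1

print(f"{false_positives} of {n_factors} noise factors appear significant")
# Roughly 2-3 on average, echoing the two-to-three figure quoted above.
```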

Campbell, Goodhart, and Factor timing

A statistician trained in economic history and economic cycles will effortlessly point to Campbell’s and Goodhart’s (C&G) laws of failure. He will also reiterate that cycles, like indicators, are destined to fail. Confirmation of a pattern does not eliminate its failure probability. Whatever significance we give to our variables, they are destined to disappoint us, stagnate, and eventually become useless. Indicator and variable failure is reality. And there is one variable that will never go away: noise, chaos, turbulence, uncertainty, etc. Hence feature importance analysis is like the story of the blind men and the elephant. The elephant can only be found if the blind deal with the unobservable and unexplainable.

This is why, when you find important features and conduct experiments, you should know that these features will eventually stop functioning in specific environments. After that, regimes will shift and the importance of the features will diminish further over time. Flagging predictability and applicability will negatively influence features in other asset classes. And in the end, a feature importance that started as a research tool will look like a failed backtest cycle. The moment you put it into application, it stops working. This is why science experiences a replicability crisis. The results are valid until you publish them as a research paper. But once the results are articulated, the behavior changes. The research is fruitless.

Similarly, asset returns are non-normally distributed, but the moment you put them to the test, a momentum crash destroys the persistence. Unpredictable fluctuation is, as the name suggests, unpredictable. And even if we assume that an indicator is not jinxed by C&G, factor timing is impossible. The spread can diverge for longer than you can stay solvent. The system is rigged against consistent anticipation. Timing is impossible.

Generalized Factor

Observable and explainable variables play a crucial role in understanding complex systems. However, it’s imperative to acknowledge that not all variables fit neatly into these categories. There is always a chance for a variable to be neither observable nor easily explainable, introducing uncertainty and noise into the equation. This noise complicates anticipation and prediction. It reminds us that even with extensive data and explanatory models, the future remains uncertain. Embracing this uncertainty requires humility, adaptability, and a willingness to update our understanding as new information emerges. Navigating the intricacies of unpredictable variables is an ongoing challenge in many fields, demanding constant vigilance and a nuanced approach to decision-making.

In the context of observability, explainability, and failure, modelling noise or the absence of it requires understanding the inherent uncertainties in the system. Noise refers to random fluctuations or unexplained variations in data, while signal represents the meaningful information we aim to extract.

To model noise, statistical techniques can be employed. These techniques analyze patterns, trends, and correlations in the data to identify the underlying signals. However, it’s critical to recognize that even with sophisticated models, the estimated signal is never absolute. It is inherently probabilistic, providing a range of possible outcomes rather than a definitive answer.

When time is added as a variable to the model, the probabilistic estimate of the signal can change. This is because the system may evolve, new data may become available, or our understanding of the underlying factors may improve over time. As a result, the estimated probabilities associated with different outcomes may shift as we gather more information.
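As a hedged illustration of how a probabilistic estimate shifts when time adds data, the sketch below runs a simple conjugate Gaussian update; the true signal, noise level, and prior are invented for the example.

```python
# A sketch of a probabilistic signal estimate evolving over time:
# each new noisy observation updates the belief about the signal.
import numpy as np

rng = np.random.default_rng(0)
true_signal, noise_sd = 1.5, 2.0     # unknown signal, noisy measurements (assumed)

mu, var = 0.0, 4.0                   # prior belief: mean 0, variance 4

for t in range(1, 11):
    obs = true_signal + rng.normal(scale=noise_sd)
    precision = 1.0 / var + 1.0 / noise_sd**2      # conjugate normal-normal update
    mu = (mu / var + obs / noise_sd**2) / precision
    var = 1.0 / precision
    print(f"t={t:2d}  signal estimate={mu:5.2f} +/- {var**0.5:.2f}")
# Both the estimate and its uncertainty shift as information accumulates.
```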

For a lay reader, think of it this way: Imagine you’re trying to predict the weather for the upcoming week. You have historical data, forecasting models, and meteorological knowledge to guide you. However, due to the complexity of weather patterns and the inherent uncertainties involved, your prediction is not a guarantee of what will happen. It’s an estimate based on available information, and as time progresses and new data becomes available, your estimate may change. The same applies to other fields where uncertainties and variables come into play. Probabilistic models help us make informed judgments while acknowledging the limits of absolute certainty.

Hence a generalized factor, let’s call it the “X factor” (f), can fluctuate between being relevant and irrelevant, applicable and redundant, anticipated and failing. We can think of this X factor as an idealized Markov chain.

A Markov chain is a mathematical model that represents a system in which the future state depends only on the current state, not on how the system arrived at that state. In our case, the X factor can be in different states (e.g., relevant or irrelevant) and can transition between these states with certain probabilities.

For example, the X factor may be relevant and applicable in one situation, but as circumstances change, it can become redundant or irrelevant. Similarly, we may anticipate the X factor having a certain impact. However, due to unforeseen factors or changing conditions, it can result in failure or have a different outcome than expected.

Importantly, these transitions in the X factor’s relevance, applicability, anticipation, or failure can also be influenced by time. As time progresses, the X factor may evolve or change its state. Changing information, events, or conditions can impact its relevance or applicability, leading to different outcomes as time unfolds. In the notation used here, [N] is the normal state while [NN] is the non-normal state.

In essence, viewing the generalized factor as an idealized Markov chain means recognizing that it can dynamically shift between states. These transitions can be influenced by time and various factors. This understanding highlights the complexity and uncertainty of the X factor. It emphasizes the need for adaptable and flexible approaches when dealing with such variables. The X factor can also be seen as a composite, with multiple factors at work.
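A minimal sketch of this idea follows; the transition probabilities between the normal [N] and non-normal [NN] states are assumptions made for illustration, not values from the text.

```python
# The X factor as a two-state Markov chain over [N] and [NN].
import numpy as np

# P[i, j] = probability of transitioning from state i to state j.
# Each row sums to 1: the factor always occupies exactly one state.
P = np.array([[0.9, 0.1],    # from [N]:  stay normal, or turn non-normal
              [0.4, 0.6]])   # from [NN]: revert to normal, or stay non-normal

belief = np.array([1.0, 0.0])        # start fully in the normal state [N]
for _ in range(50):
    belief = belief @ P              # the next state depends only on the current one
print("long-run probabilities [N], [NN]:", belief.round(3))  # ~[0.8, 0.2]
```

Note that the probability vector always sums to 1, which anticipates the closed-system point made in the next paragraphs.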

When we view factors as dynamic systems, they can exhibit various states or conditions over time. Rather than being fixed in a single state, a factor can exist probabilistically in multiple states simultaneously.

To illustrate this, imagine a factor called “A” that has two states: State 1 and State 2. Instead of being exclusively in one state or the other, factor A can have a certain probability of being in State 1 and another probability of being in State 2 at any given time. These probabilities represent the likelihood or chance of the factor being observed in each state.

Since a factor can be in multiple states simultaneously, the total probability across all possible states must add up to 1. In other words, the overall probability of the system is always 1 because it encompasses all the potential states that the factor can occupy. This principle holds true due to the closed nature of the system, meaning that all possible states are accounted for within the system’s boundaries.

By considering factors as dynamic systems and understanding their probabilistic nature, we can account for inherent uncertainties and variability in the system. This perspective allows us to capture the complexity of factors and their ability to exist in multiple states simultaneously. It also ensures that the total probability of the system remains constant and comprehensive. This generalized factor can express many statistical distributions but not at the same time.

Generalized factor as a Markov chain

Probabilistic states

Since statistical distributions can be classified along the boundary between normal and non-normal, this probabilistic system expresses both the characteristics of mean reversion and the failure of mean reversion, the latter leading to extremities and fat tails, i.e., non-normal behavior.

The concepts of mean reversion and the failure of mean reversion are relevant when considering statistical distribution behavior and their relationship to extremities and fat tails.

Mean reversion refers to the tendency of a variable or system to move back toward its average or mean over time. In other words, when a variable deviates significantly from its mean, it is expected to eventually revert to its average value.

However, in some cases, mean reversion may fail or be incomplete. This can occur due to various factors such as structural shifts, extreme events, or persistent trends. When mean reversion fails, it can lead to behaviors that deviate from normal distribution expectations.

One consequence of mean reversion failure is the occurrence of extremities or outliers. These are observations that fall in the tails of the distribution and are far away from the mean. In a normal distribution, the tails are relatively thin, indicating that extreme values are less likely to occur. However, when mean reversion fails, extreme events or values can become more frequent, resulting in fat tails in the distribution.

Fat tails refer to distributions that have more observations in the tails than a normal distribution would predict. This indicates a higher probability of extreme values or events than the normal distribution implies. Fat tails are often associated with non-normal behavior and have implications for risk management, as they imply a higher likelihood of extreme outcomes.

Mean reversion failure can lead to non-normal behavior in statistical distributions, including the occurrence of extremities and fat tails. These phenomena highlight the need to consider and account for extreme events and non-normal behavior when analyzing and modelling real-world data.
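The contrast can be sketched numerically; the AR(1) coefficient and the Student-t degrees of freedom below are arbitrary illustrative choices.

```python
# Thin tails under mean reversion vs. fat tails when it fails.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100_000

x = np.zeros(n)                       # mean-reverting AR(1): pulled back to its mean
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()

fat = rng.standard_t(df=5, size=n)    # heavy-tailed draws: extremes are more common

# Excess kurtosis: ~0 for the near-Gaussian series, ~6 for the fat-tailed one
print("mean-reverting series:", round(stats.kurtosis(x), 2))
print("fat-tailed series:    ", round(stats.kurtosis(fat), 2))
```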

The terms “rich-get-richer” and “poor-get-poorer” describe certain dynamics within a system, particularly in the distribution of resources or wealth. These dynamics can lead to non-normal behavior within specific distributions. However, it’s worth noting that normality or non-normality is a statistical property of the distribution as a whole, rather than an inherent property of individual states or phenomena within the distribution.

In statistical terms, normality refers to a specific mathematical distribution called the normal distribution or Gaussian distribution. It is characterized by a symmetric bell-shaped curve. Non-normal behavior refers to departures from this bell-shaped pattern.

The rich-get-richer and poor-get-poorer phenomena can lead to deviations from a normal distribution. They can cause power law distributions, where a small number of highly successful individuals or entities hold a disproportionate amount of resources or wealth. These power law distributions are characterized by long tails, indicating a higher probability of extreme events or values.
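A toy rich-get-richer simulation makes this visible; the sketch below uses a Simon/Yule preferential-attachment scheme chosen for illustration, and the entry rate and step count are assumptions.

```python
# Rich-get-richer: each unit of new wealth goes to an incumbent with
# probability proportional to current holdings; occasionally a poor
# entrant arrives. This generates a heavy, power-law-like tail.
import numpy as np

rng = np.random.default_rng(1)
wealth = [1.0]

for _ in range(20_000):
    if rng.random() < 0.2:                       # new entrant with one unit
        wealth.append(1.0)
    else:                                        # incumbent gains, proportional to wealth
        w = np.asarray(wealth)
        winner = rng.choice(len(w), p=w / w.sum())
        wealth[winner] += 1.0

w = np.sort(wealth)[::-1]
top = max(1, len(w) // 100)                      # the richest 1% of agents
print(f"share of total wealth held by the top 1%: {w[:top].sum() / w.sum():.1%}")
```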

Therefore, we can say that rich-get-richer and poor-get-poorer dynamics are associated with non-normal states within a distribution. These non-normal states manifest as extreme values or events at the tail ends of the distribution. However, it’s essential to remember that the overall distribution itself can still exhibit characteristics of normality or non-normality, depending on its shape and statistical properties as a whole.

In summary, the rich-get-richer and poor-get-poorer phenomena can contribute to non-normal behavior within specific distributions, characterized by power law distributions with long tails. However, normality or non-normality is a statistical property of the entire distribution, and individual states or phenomena within the distribution can exhibit non-normal behavior without necessarily defining the entire distribution as non-normal. In the 3N model of life and the 3N method, the author explains how these states function.

The probabilistic states of a factor

Idealized risk

Now imagine this dynamic factor in an idealized market. There are two opposing forces at play. On the one hand, there is the force of reversion, which tends to pull things back toward a balanced state. On the other hand, there is the force of diversion, which causes things to deviate from that balanced state.

In this idealized scenario, both reversion and diversion have equal strength, meaning they pull equally from both sides. This creates a situation where the market is in constant flux, moving between states of convergence and divergence.

Risk arises from the unpredictability of market behavior due to these opposing forces. Since reversion and diversion are both influential, it becomes difficult to anticipate market direction with certainty.

Market fluctuations introduce uncertainty and make it challenging to make accurate predictions. As a result, investors and market participants face increased risk because they cannot rely on a straightforward pattern or trend. The balance between reversion and diversion creates an element of unpredictability that adds to the risk factor.

In such a scenario, investors must carefully analyze the market and take a long-term perspective. By understanding the interplay between reversion and diversion, investors can manage their risk by diversifying their investments. They can also adopt a patient approach that accounts for the inherent uncertainties of the dynamic factors.

In this idealized scenario, reversion is the force that pulls things back towards a balanced state, while diversion is the force that causes deviations from that balanced state, creating imbalance and hence a perpetual state of motion.

Now, let’s consider the metrics associated with this idealized state. The RD factor represents the interplay between reversion (R) and diversion (D), and distributes an even probability across four sub-factors. Beta, which is the summation of all sub-factors, measures overall market risk or exposure.

Within this framework, we can illustrate the concept of Alpha and Smart Beta. Alpha (α) represents positive selection or outperformance and is the combination of the rich-get-richer (RGR) and poor-get-richer (PGR) phenomena.

On the other hand, Alpha Prime (α′) is the inverse of Alpha and consists of the set of all poor selections or underperformance. It is also known as Dumb Beta.
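Read numerically, and under the assumption that the two unnamed sub-factors are rich-get-poorer (RGP) and poor-get-poorer (PGP), the idealized even split looks like this minimal sketch:

```python
# The RD factor's even probability split across four sub-factors.
# Only RGR and PGR are named in the text; RGP and PGP are assumed
# labels for the two complementary "poor selection" sub-factors.
subfactors = {"RGR": 0.25, "PGR": 0.25, "RGP": 0.25, "PGP": 0.25}

beta = sum(subfactors.values())                     # summation of all sub-factors
alpha = subfactors["RGR"] + subfactors["PGR"]       # positive selection (Smart Beta)
alpha_prime = beta - alpha                          # its inverse (Dumb Beta)

print(beta, alpha, alpha_prime)                     # 1.0 0.5 0.5 in the idealized state
```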

In this idealized state, risk stems from the dynamic nature of the RD factor and the interplay between reversion and diversion. The balance between these forces creates constant fluctuation in the market, making it challenging to accurately predict its direction.

This unpredictability increases risk for investors and market participants. The market’s behavior is influenced by both positive selection (Smart Beta) and negative selection (Dumb Beta), which can vary over time.

Excess returns (Alpha) exist due to dynamic fluctuations. However, the risk lies in accurately predicting Smart Beta and Dumb Beta probabilities.

Investors need to be aware of this risk and approach the market long-term. By diversifying their investments and adopting a patient, passive approach, investors can minimize the probability of being negatively impacted by Dumb Beta. In addition, they can maximize the probability of capturing Smart Beta benefits.

In summary, the risk in this idealized state arises from the dynamic nature of the RD factor. It also arises from the interplay between reversion and diversion, and the unpredictability it introduces. Understanding these dynamics and taking a long-term, passive investment approach can help manage risks.

The idealized market and risk

Markov chains as a dynamic system

A Markov chain can indeed be illustrated as a dynamic system. A Markov chain is a mathematical model that describes a sequence of events or states in which the probability of transitioning from one state to another depends only on the current state, not on past history. It can be represented as a directed graph, where states are nodes and transitions between states are directed edges.

When illustrating a Markov chain as a dynamic system, we can visualize transitions between states over time. Starting from an initial state, the system moves from one state to another based on the transition probabilities. This movement between states can be represented by a sequence of snapshots or animations, showing the system’s progression through different states as time passes.

By representing the Markov chain as a dynamic system, we can gain insights into the system’s behavior and evolution over time. We can analyze properties such as convergence to equilibrium, recurrent patterns, or long-term behavior. Visualizing the dynamics of a Markov chain can enhance our understanding of its probabilistic nature. It can also provide a visual representation of how the system evolves through different states over time.
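A short simulation can stand in for those snapshots; the three states and their transition probabilities below are invented for the example.

```python
# One realized trajectory of a three-state Markov chain over time.
import numpy as np

rng = np.random.default_rng(3)
states = ["A", "B", "C"]
P = np.array([[0.6, 0.3, 0.1],       # rows: current state, columns: next state
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

current = 0                           # start in state A
path = [states[current]]
for _ in range(15):
    current = rng.choice(3, p=P[current])   # depends only on the current state
    path.append(states[current])

print(" -> ".join(path))              # a sequence of snapshots through time
```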

Stationarity, Memory, and Non-linear Mechanisms

Making data stationary can effectively remove or reduce memory or temporal dependencies. Stationarity refers to a property of data whereby statistical properties, such as the mean and variance, remain constant over time or across different subsets of the data.

When data exhibit non-stationarity, it implies that the statistical properties of the data change over time, and there may be temporal dependencies or trends present. In such cases, the data may exhibit memory, meaning that the current value of a data point may be influenced by its past values.

By transforming non-stationary data into stationary data, you remove the time-dependent patterns and dependencies. This can be achieved through techniques like detrending, differencing, or transforming the data using mathematical operations. The goal is to eliminate trends, seasonality, or other time-varying patterns, making the data stationary.

Once the data is made stationary, the statistical properties become constant over time, and any memory or temporal dependencies are eliminated or significantly reduced. This is often desirable in statistical modeling and analysis because it simplifies the data and allows for the application of various modeling techniques that assume stationarity.
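As a hedged example of differencing in practice, the sketch below applies the augmented Dickey-Fuller test from statsmodels to a simulated random walk; the series itself is synthetic.

```python
# A random walk is non-stationary (it remembers its whole past);
# first-differencing it yields stationary, memoryless noise.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
walk = np.cumsum(rng.normal(size=1_000))

p_raw = adfuller(walk)[1]            # ADF p-value: high -> cannot reject a unit root
p_diff = adfuller(np.diff(walk))[1]  # after differencing: effectively zero

print(f"ADF p-value, raw series:    {p_raw:.3f}")
print(f"ADF p-value, differenced:   {p_diff:.3f}")
```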

However, it’s important to note that making data stationary and removing memory is not always appropriate or necessary for all types of data analysis. In some cases, preserving temporal dependencies or memory in the data is essential for accurate modeling and analysis. The decision to make data stationary or retain memory depends on the specific context and goals of the analysis.

Stationary data, by definition, should not possess time-dependent patterns or trends. However, there may be scenarios where the data appears stationary but still retains memory or temporal dependencies. This can occur under the following conditions:

  1. Seasonal Patterns: Despite being stationary, the data may still have memory if they exhibit repeating patterns or cycles. For example, in seasonal time series data, observations may show regular fluctuations over specific time intervals, indicating seasonal memory.

  2. Autoregressive Processes: Even stationary data can exhibit memory due to autoregressive processes. Autoregressive models incorporate lagged values of the variable itself as predictors, which introduce memory or temporal dependencies. These models capture the relationship between current and past values and can retain memory even in stationary data.

  3. Long-Term Dependencies: Certain types of data may possess long-term dependencies, where observations from the distant past continue to influence current values. These dependencies can persist even in stationary data, indicating memory over longer time scales.

  4. Non-linear Dependencies: Stationarity assumes constant statistical properties over time. However, if the data exhibit non-linear dependencies, such as complex interactions or feedback mechanisms, memory can be present even in stationary data.

Under these conditions, although the data may be considered stationary based on statistical properties like mean and variance, there are still underlying patterns, cycles, autoregressive processes, or long-term dependencies that introduce memory or temporal dependencies.
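Condition 2 above is easy to verify numerically: the stationary AR(1) series sketched below (coefficient chosen arbitrarily) has constant mean and variance yet clearly remembers its previous value.

```python
# A stationary series that still has memory: AR(1) with phi = 0.8.
import numpy as np

rng = np.random.default_rng(11)
phi, n = 0.8, 50_000

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()   # stationary because |phi| < 1

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.2f}")  # ~0.8: today still depends on yesterday
```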

Universality and generalization

“Universal” in the context of a mechanism typically implies its ability to generalize across different domains, contexts, or data sources. A universal mechanism captures underlying patterns and relationships that hold true across diverse settings. This allows it to apply its knowledge and insights to new and unseen data.

Generalization refers to the ability of a model or mechanism to perform well on data it has not been directly trained on. It means that the learned knowledge and patterns from the training data can be effectively applied to make accurate predictions or decisions about unseen or future data points.

Machine learning requires generalization. It ensures that the model or mechanism can handle new data instances or scenarios beyond the training data, making it applicable in real-world settings.

A universal mechanism, which can generalize its learned knowledge across different domains, asset classes, regions, or instruments, aims to capture and leverage fundamental principles or patterns that are broadly applicable. This universality enables the mechanism to adapt and perform well in diverse contexts, even without an extensive amount of specific training data in each context.

However, it’s important to note that while a universal mechanism may have the potential for generalization, the actual success of generalization depends on various factors. These include the quality and diversity of the training data, the complexity of the problem, and the effectiveness of the learning algorithms employed.

Backdating vs. Backtesting

Universality, in the context of differentiating between backdating and backtesting, may refer to the ability of a mechanism to capture and adapt to dynamic data and market conditions.

Backdating refers to the practice of assigning historical data or information to a specific point in time. This is to simulate a mechanism or strategy’s performance or behavior. It involves applying a mechanism retrospectively to past data to assess its potential effectiveness.

Backtesting, on the other hand, involves testing a mechanism or strategy on historical data to evaluate its performance. It also makes projections about its future behavior. It involves simulating trades, portfolio allocations, or decision-making processes based on historical data to assess the potential profitability or risk associated with the mechanism.

Universality can help differentiate between backdating and backtesting because a universal mechanism adapts and generalizes across different data sources and periods. It can effectively handle the dynamic nature of data and market conditions, allowing for more accurate backtesting and projections into the future.

By incorporating universal principles and patterns that hold across various domains and periods, a universal mechanism can provide more reliable insights and predictions during backtesting. It can adapt to changing market dynamics and capture essential factors that drive performance, allowing for a more realistic assessment of its potential effectiveness.

In contrast, a mechanism that lacks universality may struggle to adapt to new or unseen data, leading to unreliable backtesting results. It may fail to capture critical patterns or account for changes in market conditions, leading to inaccurate projections and misleading conclusions.

Therefore, the universality of a mechanism can play a significant role in differentiating between backdating and backtesting by ensuring that the machine can effectively capture and adapt to the dynamic nature of data and market conditions, leading to more accurate and reliable assessments of its performance.

A universal process would therefore be computationally light and achieve better results without relying on too much data. This avoids other data challenges like backtest overfitting, cross-validation leakage, fixed-time-horizon labeling, etc.

Generalized ML

Consider a generalized factor based on a stationary ranking, operating as a Markov chain, with the probabilities of its states reaching equilibrium. This lays the foundation for a generalized machine learning process that does the following (a compact sketch in code follows the steps below).

  1. Ranking Data: Gather data that represents the stationary ranking of the factors over time. The data should reflect the relative order or importance of the factors and their evolution over time.

  2. Stationary Ranking Representation: Define and represent the stationary ranking states of the factors. Each state represents a specific order or arrangement of factors based on their importance.

  3. Transition Matrix Estimation: Estimate the transition probabilities between stationary ranking states based on observed data. Analyze the frequency of transitions and compute probabilities. In this case, the transition probabilities would represent the likelihood of changes in the relative importance of the factors.

  4. Markov Chain Model: Utilize a Markov Chain Model to capture the stationary ranking behavior of the factors. The model’s states represent the different stationary rankings, and the transition probabilities reflect the probabilities of transitioning between the rankings.

  5. Training and Inference: Train the Markov Chain Model using ranking data to estimate model parameters, including transition probabilities. Once trained, the model can infer and predict the most likely sequence of stationary rankings given the observed data.

  6. Model Validation: Evaluate the Markov Chain Model’s performance using appropriate evaluation metrics, such as log-likelihood or perplexity. Ensure that the model accurately captures stationary ranking behavior and predicts the relative importance of the factors.

  7. Model Deployment: Deploy the Markov Chain Model to make predictions on new data. Monitor the model’s performance and verify that it continues to align with the observed stationary ranking behavior.

When the factors are based on a stationary ranking and can be modelled as Markov chains with equilibrium probabilities, the generalized machine learning process focuses on capturing and modelling the transitions between stationary ranking states. The Markov Chain Model provides insights into the relative importance of factors and enables predictions based on stationary rankings.
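Here is a compact end-to-end sketch of steps 1 through 7. The factor names, the synthetic ranking history, and the state encoding are all invented for illustration; a real application would substitute observed ranking data.

```python
# Steps 1-7 in miniature: encode rankings as states, count transitions,
# normalize into a transition matrix, and read off the equilibrium.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(8)
factors = ("value", "momentum", "size")               # hypothetical factors
states = list(permutations(factors))                  # step 2: each ranking is a state
idx = {s: i for i, s in enumerate(states)}

# Step 1: a synthetic ranking history (replace with observed data)
history = [states[rng.integers(len(states))] for _ in range(2_000)]

# Step 3: count observed transitions, then normalize rows into probabilities
counts = np.zeros((len(states), len(states)))
for a, b in zip(history[:-1], history[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)        # step 4: the Markov chain model

# Steps 5 and 7: equilibrium probabilities by power iteration
pi = np.full(len(states), 1.0 / len(states))
for _ in range(500):
    pi = pi @ P
print("most likely long-run ranking:", states[int(np.argmax(pi))])

# Step 6: log-likelihood of the observed sequence under the fitted model
loglik = sum(np.log(P[idx[a], idx[b]]) for a, b in zip(history[:-1], history[1:]))
print("log-likelihood:", round(loglik, 1))
```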

Conclusion

Machine-based generalized learning is based on generalized factors that operate as part of the mechanism, exhibit universality, are data- and computation-light, and can answer a wide range of questions, compared to the limited scope of conventional factors driven by human bias. In the world of generalized ML, the conventional semantic LLM is more appropriate for the re-education of global investors, so that they can unlearn knowledge built on poor assumptions about information. Generalized Machine Learning will eventually take up Granger’s 1992 challenge, understand complexity and strange attractors, and explain why chaos is connected to the number 3, eventually relegating the 9-out-of-10 underperforming asset managers statistic to redundancy.

Bibliography

[1] K. Matia, M. Pal, H. E. Stanley, H. Salunkay, Scale-Dependent Price Fluctuations for the Indian Stock Market, EuroPhysics Letters, Aug 2003

[2] M. Pal, M. Shah, A. Mitroi, Temporal Changes in Shiller’s Exuberance Data, SSRN, Feb 2011

[3] M. Pal, Mean Reversion Framework, SSRN, May 2015

[4] M. Pal, Markov and the Mean Reversion Framework, SSRN, May 2015

[5] M. Pal, Momentum and Reversion, Aug 2015

[6] M. Pal, What is Value, SSRN, Sep 2015

[7] M. Pal, M. Ferent, Stock Market Stationarity, SSRN, Sep 2015

[8] M. Pal, Reversion Diversion Hypothesis, SSRN, Nov 2015

[9] M. Pal, How Physics Solved your wealth problem, SSRN, Oct 2016

[10] M. Pal, Human AI, SSRN, Jul 2017

[11] M. Pal, The Size Proxy, Aug 2017

[12] M. Pal, The Beta Maths, SSRN, Mar 2017

[13] A. Bhattacharya, M. O’Hara, ETFs and Systematic Risk, CFA Research Institute, Jan 2020

[14] M. Pal, [3N] model of life, SSRN, Apr 2021

[15] M. Pal, The S&P 500 Myth, SSRN, Jul 2022

[16] M. Pal, The Snowball Effect, SSRN, Jul 2022

[17] M. Pal, Mechanisms of Psychology, SSRN, Jun 2022

[18] M. Pal, The [3N] model of life, SSRN, Feb 2023