Weak Artificial Intelligence (AI) rests on three fundamental assumptions. First, it posits that greater computational power yields greater intelligence, since the challenge of processing data has diminished compared to a decade ago. Second, it acknowledges that human biases permeate data curation and amplify the biases already inherent in the data; the prevailing belief, unfortunately, is that these biases are irremediable, a tragic outcome for all. Third, it assumes that the realm of science lies outside its purview and does not pose a significant challenge.

The Computational Assumption

It is vital to recognize that computation alone does not equate to intelligence, as real-world examples make clear. Consider the intricacies of chess. While computers can perform billions of calculations per second, it took decades for them to surpass human chess grandmasters. This is a testament to the fact that true intelligence encompasses strategic thinking, pattern recognition, and long-term planning, extending beyond sheer computational power.

Language understanding and translation offer another illustration. Despite significant advancements in machine translation systems, nuances, context, and idiomatic expressions continue to pose challenges that humans effortlessly comprehend. Language intelligence necessitates not only computational processing but also semantic understanding, cultural knowledge, and contextual interpretation. Furthermore, computational systems struggle with tasks such as understanding emotions, humor, or sarcasm, as these require a profound comprehension of human experiences and social dynamics. These examples demonstrate that intelligence encompasses a broad spectrum of cognitive abilities and complexities that surpass the realm of mere computation. While computation is undoubtedly a powerful tool, it alone cannot capture the breadth and depth of human-like intelligence.

The Bias Assumption

Indeed, human-curated data presently drives the AI domain. The entrenchment of biases within this data contributes to a tragic outcome, because rectifying the situation at the source seems impossible. However, it is not inconceivable to create machine-biased data, whereby machines reclassify, relearn, and train on data without the influence of human bias. This approach would supplant human bias with a dynamic and adaptable machine bias. Machine biases, being statistical, would transcend specific domains, possess broader contextuality, and could be trained to identify and mitigate amplification. This alternative avenue would thus avert the tragedy of the commons associated with human biases.
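
One statistical route to such machine bias is to let the data's own joint statistics, rather than a human curator, set the training weights. The sketch below is a minimal illustration of that idea under a simple reweighing scheme, not a method described in this article; the `reweigh` function and the toy data are hypothetical.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute per-example weights that make group and label
    statistically independent in the weighted sample.

    weight(g, y) = P(g) * P(y) / P(g, y)

    Cells over-represented relative to independence get weights below 1;
    under-represented cells get weights above 1.
    """
    n = len(labels)
    group_counts = Counter(groups)              # counts per group
    label_counts = Counter(labels)              # counts per label
    joint_counts = Counter(zip(groups, labels))  # counts per (group, label) cell

    weights = []
    for g, y in zip(groups, labels):
        expected = group_counts[g] * label_counts[y] / n  # count expected under independence
        observed = joint_counts[(g, y)]                   # count actually observed
        weights.append(expected / observed)
    return weights

# Toy example: label 1 is over-represented for group "A" in the raw data.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 0, 0, 0, 1]
print(reweigh(groups, labels))
```

The weights are derived purely from the observed frequencies, so the correction adapts as the data changes rather than being fixed by a human curator's judgment.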

The Scientific Assumption

Science represents a crucial challenge for AI. It is futile to endeavor to construct intelligence on incomplete scientific knowledge; incomplete science begets flawed tools that do not generate intelligence but merely add to the proliferation of new information.

AI is an extension of statistics. Yet it is worth asking whether AI researchers consider the incompleteness of statistics before employing recurrent neural network processes. Do they ponder the failure, formation, and transformation of statistical laws? For instance, when does normal statistical behavior deviate into non-normal behavior? Understanding the interplay between these statistical behaviors is a neglected foundation of active investment management.
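
To make the question concrete, here is a toy simulation, added purely as an illustration rather than a study from this text: it contrasts a thin-tailed normal sample with a heavy-tailed Student-t sample whose population variance is infinite. The normal sample's standard deviation settles quickly, while the heavy-tailed one never does, which is one simple way "normal" behavior deviates into non-normal behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

normal = rng.standard_normal(n)          # thin-tailed: sample std stabilizes quickly
heavy = rng.standard_t(df=1.5, size=n)   # heavy-tailed: population variance is infinite

def running_std(x, checkpoints):
    """Sample standard deviation using only the first k observations."""
    return {k: round(float(x[:k].std(ddof=1)), 3) for k in checkpoints}

checkpoints = [1_000, 10_000, 100_000]
print("normal:", running_std(normal, checkpoints))
print("heavy :", running_std(heavy, checkpoints))
```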

AI is an extension of mathematics. Yet do AI researchers delve into the philosophical aspects of probability? The law of large numbers, first proved by Jacob Bernoulli in the late 17th century, shows that in repeated independent experiments with two possible outcomes, the observed frequency of success converges toward the true probability of success. The concept goes to the heart of randomness and probability. However, its full implications remain elusive, because the law concerns random experiments, involves stochastic convergence, and is itself a statement about probabilities.
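
In modern notation, Bernoulli's theorem (the weak law of large numbers for binary trials) can be written as follows, where $S_n$ is the number of successes in $n$ independent trials, each with success probability $p$:

```latex
% Weak law of large numbers for Bernoulli trials
\[
  \lim_{n \to \infty}
  P\!\left( \left| \frac{S_n}{n} - p \right| \ge \varepsilon \right) = 0
  \qquad \text{for every } \varepsilon > 0 .
\]
```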

The law of large numbers does not diminish the inherent uncertainty of the underlying experiment; it merely allows the associated probabilities to be measured more accurately as trials accumulate. Nor does it alter future outcomes based on past results: in a lottery, for example, the chance of drawing a particular number is unaffected by how often it has appeared before. The law simply permits more precise probabilistic measurement in relative terms.
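
A short simulation, added here purely as an illustration with an arbitrarily chosen true probability of 0.3, shows both facts: the relative frequency drifts toward the true probability as trials accumulate, while the frequency of success immediately after a success stays roughly the same, i.e., past draws do not shift future odds.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.3                            # illustrative probability, chosen arbitrarily
trials = rng.random(1_000_000) < p_true  # True marks a success

# 1) The relative frequency converges toward p_true as trials accumulate.
for n in (100, 10_000, 1_000_000):
    print(f"after {n:>9,} trials: frequency = {trials[:n].mean():.4f}")

# 2) Past outcomes do not alter future ones: the success rate immediately
#    after a success is still roughly p_true.
after_success = trials[1:][trials[:-1]]
print(f"frequency after a success: {after_success.mean():.4f}")
```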

Jacob Bernoulli, the discoverer of this law, introduced a new perspective on probabilities. Philosophically, the law of large numbers implies that absolute certainty cannot be attained when our knowledge rests on uncertain and imperfect measurements. Bernoulli called the best we can achieve "moral certainty": probabilities can be measured ever more accurately, but the number of experiments required for high precision may become impractical. The frequentist view of probability therefore has its limits.
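
Bernoulli's worry about impracticality can be made concrete with a modern bound. The sketch below uses Hoeffding's inequality (a later result, not Bernoulli's own estimate) to compute how many trials guarantee that the measured frequency lies within a chosen tolerance of the true probability at a "morally certain" confidence level; tightening the tolerance tenfold multiplies the required trials a hundredfold.

```python
import math

def trials_for_moral_certainty(tolerance, confidence):
    """Smallest n such that P(|frequency - p| >= tolerance) <= 1 - confidence,
    via Hoeffding's inequality: 2 * exp(-2 * n * tolerance**2) <= 1 - confidence.
    """
    return math.ceil(math.log(2 / (1 - confidence)) / (2 * tolerance ** 2))

for tol in (0.01, 0.001):
    n = trials_for_moral_certainty(tolerance=tol, confidence=0.999)
    print(f"within ±{tol} at 99.9% confidence: {n:,} trials")
```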

AI is not divorced from physics, which has yet to elucidate why the second law of thermodynamics fails. Although we refer to this failure as "the demon," after Maxwell, we have not yet uncovered its true nature. Instead, we fashion models of the world to fit our AI framework. We discuss how physics poses a threat to our wealth while neglecting the insights of thinkers like Herbert Simon, who endeavored to unravel the intricacies of complexity.

One can delve further into psychology, biology, chemistry, and other disciplines for additional evidence that contemporary AI lacks the scientific rigor necessary to comprehend its own operations. Generative AI, the semantic tool that has captivated the world, is thus merely a repackaging of existing concepts, creating new information rather than true intelligence.

Artificial General Intelligence

To achieve Artificial General Intelligence, a systematic, scientific, and replicable process capable of conceptualizing the mechanisms of nature is indispensable. Such an approach would not only minimize computational burdens in generating outcomes but would also transcend specific domains, thereby exhibiting generalization. Importantly, this approach would eschew the static nature of information steeped in content and instead adopt a contextual perspective. It is essential to acknowledge that researchers, as human beings, may harbor anchoring and confirmation biases. Regrettably, modern finance finds itself grappling with obsolescence, lionizing certain works while disregarding non-confirming or conflicting contributions, such as Boulding's 1966 work, which provides profound insights into the fluctuating relevance and irrelevance of information.

Conclusion

AI is a tool rendered useless without a robust scientific foundation. Without an understanding of the informational architecture underpinning it, AI merely succumbs to the principle of "garbage in, garbage out." Semantic tools are susceptible to pitfalls, generating nonsensical outputs because they lack common sense; ChatGPT, for example, is essentially an elevated tool for internet search. The only AI with the potential for self-enhancement is one capable of acquiring common sense, operating across domains, anticipating future outcomes, and being trained on an informational architecture. This is the AGI we are developing at AlphaBlock. Until more such initiatives become mainstream, society remains confined to the hopelessness of the Information Age.