I have thought about machines that assimilate knowledge for more than a decade. For me, assimilation was always an achievable present, not a future imagination. The 2002 film The Time Machine, based on H.G. Wells' novel, left an indelible mark on me. The hero of the film, an inventor named Alexander Hartdegen, communicates through a screen with a humanoid holographic artificial intelligence called Vox. Vox possesses vast knowledge of history and acts as a repository of information, serving as Hartdegen's source of historical answers as he navigates different time periods. Vox is, in some ways, an assimilator: he understands, comprehends, and perceives, rather than merely retrieving information or navigating it like ChatGPT. A navigator does not understand data the way a human does. So how do we transform a ChatGPT into Vox, the assimilator?

Navigators vs. Assimilators

Before we dive into the transformation, it is worth distinguishing the two stages of learning. It is easy for a child to relate to a book, even to read it, but it is the comprehension that is harder. Navigation has been present in various forms of technology for a long time. From search engines that retrieve information from vast databases to recommendation algorithms that suggest relevant products or content based on user preferences, navigation has been a critical function of many information processing systems. These systems are designed to process and retrieve data effectively, but they often cannot dynamically adapt and learn from new information.

Assimilation, on the other hand, goes beyond navigation by enabling machines to incorporate new information into their existing knowledge or cognitive frameworks. Assimilation allows machines to adapt and learn from new data, update their knowledge, and make more informed decisions. The process of retention and integration is complex and closely mirrors how humans learn and update their knowledge based on new experiences, making it a significant step towards truly intelligent systems. Assimilation is the foundation for such systems because it allows machines to move beyond static data processing toward dynamic and adaptive learning.

Constraints for Navigators

Navigator systems rely on patterns learned from training data, and their understanding of context may be limited. They may not always fully comprehend the nuances, complexities, or subtleties of a given situation, leading to potentially inaccurate or incomplete responses.

Navigator systems like ChatGPT cannot learn from feedback or correct mistakes in real time. They generate responses based on patterns learned from training data and do not have a feedback loop to improve their responses based on user interactions or feedback.

Navigator systems do not possess true reasoning or critical thinking capabilities. They rely on pattern recognition and statistical associations in their training data, which may not always result in logical or nuanced responses to complex queries.

Constraints for Assimilators

Assimilation systems are typically built on specific algorithms or models that may have limitations in their ability to effectively assimilate and integrate new data. If the algorithms used are not sophisticated enough or cannot handle complex data relationships, the assimilation process may be constrained.

Assimilators rely on contextual understanding to effectively integrate new information with existing knowledge. If the system cannot understand and interpret contextual cues, such as nuances in language or domain-specific knowledge, it may struggle to accurately assimilate new information.

These systems can be computationally expensive, requiring significant computational resources and processing power. If the system lacks the necessary computational capacity, it may be constrained in its ability to assimilate large amounts of data in real time or near real time.

Contextual Learning

Computational limitations in assimilation systems may not be directly overcome by contextual understanding alone, since they stem from the available computational resources and processing power. However, contextual understanding can help optimize the assimilation process and make it more efficient by enabling the system to focus on relevant and meaningful data.

The concept of contextual learning can be explained with the analogy of a librarian organizing a collection of books. The librarian has limited space on the shelves and needs to be strategic about which books to keep and which to discard. Similarly, an assimilation system with limited computational capacity can prioritize relevant and meaningful data while filtering out irrelevant or redundant information, making the assimilation process more efficient.

Contextual understanding can also help the system interpret and integrate new data into its existing knowledge base more accurately, by enabling it to grasp the nuances of language, domain-specific jargon, or cultural references that affect the assimilation process. In the analogy, the assimilation system is the librarian and the books are the data being assimilated: the librarian's contextual understanding allows for efficient filtering and prioritizing of books based on their relevance, just as a machine's contextual understanding allows it to focus on relevant and meaningful data, as sketched below.
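
To make the librarian analogy concrete, here is a minimal sketch in Python. It is an illustration, not a prescribed algorithm: the relevance score and the 0.3 threshold are invented for this example. Incoming items are scored against the existing knowledge base, only the sufficiently relevant ones are kept, and their terms are folded back into the knowledge base.

```python
# A rough illustration of contextual filtering: score incoming items against
# existing knowledge, keep only relevant ones, then integrate their terms.
# The relevance measure and threshold are assumptions made for this sketch.
from collections import Counter

def relevance(text: str, knowledge_terms: Counter) -> float:
    """Fraction of words in `text` that already appear in the knowledge base."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(knowledge_terms[w] > 0 for w in words) / len(words)

def assimilate(new_items: list[str], knowledge_terms: Counter, threshold: float = 0.3) -> list[str]:
    """Keep only contextually relevant items, then fold their terms into the knowledge base."""
    kept = []
    for item in new_items:
        if relevance(item, knowledge_terms) >= threshold:
            kept.append(item)
            knowledge_terms.update(item.lower().split())
    return kept

knowledge = Counter("time travel machine history information".split())
incoming = ["the time machine stores history", "discount shoes on sale now"]
print(assimilate(incoming, knowledge))  # -> ['the time machine stores history']
```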

Contextual Abstraction

Abstraction is another form of contextual learning. It involves generalizing, extracting key concepts or patterns, and creating higher-level representations of information, capturing its essence in a more compact and meaningful form. This process allows a learner to move beyond specific details and identify broader principles or concepts that apply to a wider range of situations, which in turn facilitates the assimilation of new information.

Using the analogy of a librarian organizing a collection of books, abstraction can be thought of as creating a catalog or index of the collection. The librarian identifies common themes and topics that run through the collection and groups the books by author, genre, or subject matter, making the shelves easier to organize and navigate. Similarly, abstraction in contextual learning enables individuals to identify patterns and relationships across different sources of information, create mental models or frameworks, and use them for reasoning, problem-solving, and decision-making. These higher-level representations facilitate the assimilation of new information by providing a context for interpreting and integrating it into existing knowledge.
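
A rough sketch of abstraction as cataloguing, again purely as an illustration: a flat collection of books (specific details) is compressed into a subject index (a higher-level representation), and a new title is assimilated by slotting it under an existing concept rather than being memorized on its own. The titles and subjects below are placeholders.

```python
# A minimal sketch of abstraction as cataloguing: compress a flat collection
# into a subject -> titles index, then assimilate a new title by placing it
# under an existing concept. All entries here are illustrative placeholders.
from collections import defaultdict

BOOKS = [
    ("The Time Machine", "time travel"),
    ("A Mathematical Theory of Communication", "information theory"),
    ("Human Behavior and the Principle of Least Effort", "language"),
]

def build_catalog(books: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Abstract a flat list of (title, subject) pairs into a subject -> titles index."""
    catalog: defaultdict[str, list[str]] = defaultdict(list)
    for title, subject in books:
        catalog[subject].append(title)
    return dict(catalog)

catalog = build_catalog(BOOKS)
catalog.setdefault("time travel", []).append("The Time Traveler's Wife")
print(catalog["time travel"])  # -> ['The Time Machine', "The Time Traveler's Wife"]
```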

Human Learning and Assimilation

The human brain has computational limitations in terms of processing speed, capacity, and energy consumption. However, humans can assimilate new information effectively thanks to their context-rich grasp: the ability to understand and interpret information in the context of their existing knowledge, experiences, and cognitive processes.

Human assimilation is often facilitated by the brain's ability to process information holistically and contextually. Humans draw upon their existing knowledge and experiences to interpret new information, make connections, and integrate it into their existing mental models. This context-rich grasp allows humans to assimilate new information in a more meaningful and efficient way, because they can relate it to what they already know and make sense of it in a broader context.

Additionally, humans possess higher-order cognitive abilities, such as reasoning, critical thinking, and abstraction, which further facilitate the assimilation process. These cognitive processes allow humans to analyze, evaluate, and integrate new information into their existing knowledge base, even when faced with complex or ambiguous situations.

Contextual Informational Mechanism

To make machines smarter than humans, machines have to do more than the conventional brain does. They have to rely on mechanisms, because contextual abstraction has limits. Mechanisms, on the other hand, can be perpetual, unique, limit-defining, and limitless in scope. Contextual informational mechanisms might seem imaginary, but they have been with us since Shannon wrote his paper on information theory. In that paper, he described the structure of communication as a mechanism with three main components: a source of information, a transmitter that encodes the information into a signal, and a receiver that decodes the signal back into information. Information theory has its foundations in Johnson–Nyquist noise, which is also mechanistic: a convergence-divergence mechanism built from electric and magnetic fields, a perpetual motion of order and disorder.
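
Shannon's source-transmitter-receiver mechanism is easy to sketch. The toy example below assumes an 8-bit-per-character encoding chosen purely for illustration; it shows only the shape of the mechanism, not anything specific to Shannon's paper.

```python
# A minimal sketch of Shannon's communication mechanism: a source message is
# encoded by a transmitter into a signal (here, a bit string) and decoded by a
# receiver back into information. The 8-bit encoding is an illustrative choice.
def transmit(message: str) -> str:
    """Transmitter: encode the source message into a binary signal."""
    return "".join(f"{ord(ch):08b}" for ch in message)

def receive(signal: str) -> str:
    """Receiver: decode the binary signal back into the message."""
    chars = [signal[i:i + 8] for i in range(0, len(signal), 8)]
    return "".join(chr(int(bits, 2)) for bits in chars)

signal = transmit("Vox")
print(signal)                    # -> 010101100110111101111000
print(receive(signal) == "Vox")  # -> True
```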

This mechanism opens up the possibility of understanding new contextual boundaries for information, boundaries that create new contexts and destroy older ones, a complexity beyond human definition. By incorporating a dynamic mechanistic context, machines could potentially adapt to changing information, update their knowledge, and refine their understanding of new information. This would allow them to assimilate information more effectively by capturing the evolving nature of information and adjusting accordingly.

Mathematical History of Language and Intelligent Agents

The further we go into the future, the more we revisit history. The answer to the assimilation puzzle can also be found in the mathematical history of language. George Kingsley Zipf argued that language development was about a vocabulary balance driven by two opposing forces, reversion (unification) and diversion (diversification), a mechanism in itself. The parent spoke at the level of the child, building vocabulary word by word, while the child learned fast along the path of least effort, assimilating the language, its context, and its abstraction, driven by the mechanism of listener and speaker. A small sketch of the resulting balance follows.
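
Zipf's vocabulary balance leaves a measurable trace, the rank-frequency law: the r-th most common word occurs roughly 1/r as often as the most common one. The sketch below uses the idealized form of the law with made-up numbers, just to show the shape of the balance between the speaker's economy of few words and the listener's need for many distinct ones.

```python
# A toy illustration of Zipf's rank-frequency law. The vocabulary size and
# top frequency are made-up numbers; only the 1/rank shape matters here.
def zipf_frequencies(top_frequency: float, vocabulary_size: int) -> list[float]:
    """Ideal Zipf law: the expected frequency of the word at rank r is f(1) / r."""
    return [top_frequency / rank for rank in range(1, vocabulary_size + 1)]

print(zipf_frequencies(top_frequency=1000, vocabulary_size=5))
# -> [1000.0, 500.0, 333.33..., 250.0, 200.0]
```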

Machines can learn to assimilate like a child if we use the right contextual training mechanisms. Such a machine won't have to start from words, then books, then the library. We can give it a song and it will learn to hum it back, and even give us a new version, better than Vox.