Meta-circular evaluation is a form of language dynamism in which a language interprets itself: the interpreter's input data is expressed in the very language it implements. Layered above this self-interpretation is an update mechanism that revises the ultimate behavior (or return value) of the program's own instructional composition. The program keeps the same appearance, familiarity, and user interface, yet can be enhanced and evolved without restarting from the beginning and incurring that delay.
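A simplified sketch of the idea, in Python: an evaluator whose programs are the host language's own data structures (nested lists), and whose `define` form extends the running environment in place, so new behavior appears without restarting. All names here are illustrative.

```python
# Minimal evaluator sketch: programs are nested Python lists; "define"
# mutates the live environment, so behavior evolves without a restart.

def evaluate(expr, env):
    """Evaluate an expression represented as nested lists."""
    if isinstance(expr, (int, float)):       # literals evaluate to themselves
        return expr
    if isinstance(expr, str):                # symbols look up in the environment
        return env[expr]
    op, *args = expr
    if op == "define":                       # extend the environment in place:
        name, value = args                   # the running program gains new
        env[name] = evaluate(value, env)     # behavior, same interface
        return env[name]
    if op == "lambda":
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                   # otherwise: apply a function
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
evaluate(["define", "square", ["lambda", ["x"], ["*", "x", "x"]]], env)
print(evaluate(["+", ["square", 3], 1], env))  # prints 10
```

Because `define` mutates `env` while the program runs, the same evaluator loop keeps serving new definitions, which is the "enhance without restarting" property the paragraph describes.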
We can also trade memory capacity for computation speed: by making each operation's result depend on the result of the previous one, we can pre-compute and store logical results. The previous operation effectively says, "modify the application in place, using the new input, so it responds more quickly to the next, or any further, input."
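A minimal sketch of that memory-for-speed trade in Python, using the standard library's `functools.lru_cache`: each result is stored the first time it is computed, so later operations that depend on it reuse the cached value instead of recomputing.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each call's result is cached in place; fib(n + 1) depends on
    fib(n), so later calls hit the cache instead of recursing again."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # fast; without the cache this recursion is infeasible
```

The cache is exactly the "modify the application in place" move: the function's observable interface never changes, but after each input it answers the next one faster.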
Training an AI is like compiling source code, but in the human brain, fine-tuning (adjusting previously calculated neuronal weights) happens dynamically, in real time. A conversation usually continues from where it last ended, leveraging holistic comprehension. If an AI could be modeled to integrate the endpoint of a conversation into its memory, as the result of "when the conversation started and was met with a certain input," it might improve relevance and conscious presence.
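A hypothetical sketch of that idea, with the model itself stubbed out (the file name, functions, and placeholder reply logic are all illustrative assumptions, not a real API): persist the end state of each conversation so the next session resumes from it, rather than starting cold.

```python
# Sketch: fold the conversation's "endpoint" back into persistent memory
# so the next session continues from where the last one ended.
import json
from pathlib import Path

STATE_FILE = Path("conversation_state.json")  # assumed storage location

def load_state():
    """Resume from the last conversation's endpoint, if one was saved."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"history": []}

def save_state(state):
    """Record the conversation's endpoint as the seed for next time."""
    STATE_FILE.write_text(json.dumps(state))

def respond(state, user_input):
    """Placeholder for a model: the reply is conditioned on how much
    shared history already exists, then appended to that history."""
    reply = f"(turn {len(state['history']) + 1}) you said: {user_input}"
    state["history"].append({"input": user_input, "reply": reply})
    return reply

state = load_state()
print(respond(state, "hello again"))
save_state(state)  # next run of the program picks up from this endpoint
```

The design point is only the loop shape: input arrives, the response updates the stored state, and the stored state becomes the starting context of the next session.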