2 min read

Meta-circular evaluation is a form of language dynamism: a program interprets input data written in the same language the program itself is written in, so code is just data it can inspect and run. Layered on top of that self-interpretation is an application mechanism that revises the behavior (or return value) of its own instructions, so the program keeps the same appearance, familiarity, and user interface while being enhanced and evolved without restarting from the beginning and incurring delay.
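A minimal sketch of this idea in Python: a tiny evaluator whose "programs" are ordinary nested lists, so code is data the host program can inspect or rewrite before running it. The `evaluate` function and the expression format here are illustrative inventions, not any standard library.

```python
def evaluate(expr, env):
    """Evaluate a nested-list expression like ["+", 1, ["*", 2, 3]]."""
    if isinstance(expr, (int, float)):
        return expr                      # literal number
    if isinstance(expr, str):
        return env[expr]                 # variable lookup
    op, *args = expr
    values = [evaluate(a, env) for a in args]
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[op](*values)

program = ["+", "x", ["*", 2, 3]]        # code represented as data
print(evaluate(program, {"x": 4}))       # → 10
```

Because `program` is a plain list, the running program could rewrite it (say, replacing a subexpression with its precomputed value) and re-evaluate, without ever being restarted.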

OS “dynamic-link libraries” and polymorphic (virtual and closure-based) methods supply this dynamism, as do LISP, JavaScript, PHP, Python, and even Java. Consider rewriting a function so that it only continues calculating if a certain condition is met. To apply such a condition to many functions (or even to every individual instruction), one might rewrite the language interpreter to do it automatically, or make it the effect of another condition, such as: “only while it has not yet been 30 minutes.” Or consider a function that remembers its result, as a database index or search engine does.
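Both ideas can be sketched in a few lines of Python, here with a hypothetical `only_until` decorator (an illustrative name, not a library function) for the 30-minute condition, and `functools.lru_cache` for a function that remembers its result:

```python
import time
from functools import lru_cache

def only_until(seconds):
    """Decorator: the wrapped function keeps computing only while the
    deadline has not passed; afterwards it returns None without running.
    (Illustrative sketch -- not from any standard library.)"""
    def wrap(fn):
        deadline = time.monotonic() + seconds
        def inner(*args, **kwargs):
            if time.monotonic() < deadline:
                return fn(*args, **kwargs)
            return None                  # condition failed: skip the work
        return inner
    return wrap

@only_until(30 * 60)                     # "only while it has not yet been 30 minutes"
def expensive(n):
    return sum(i * i for i in range(n))

@lru_cache(maxsize=None)                 # a function that remembers its results
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(expensive(1000))                   # runs: deadline not yet passed
print(fib(30))                           # → 832040, each subresult computed once
```

Wrapping functions one by one with a decorator is the userland version of the heavier approach in the text: rewriting the interpreter so the condition applies to everything automatically.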

We can trade memory capacity for computation speed: by making the result of the next operation depend on the result of the previous one, we can pre-compute logical results. The previous operation might, in effect, say: “modify the application in place, using the new input, so it responds more quickly to the next (or any further) input.”
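One way to make this concrete in Python, assuming module-level functions, is a function that literally modifies the application in place: the first call does the heavy work, then rebinds its own name so every later call takes the fast path. The `lookup` function and its squares table are illustrative inventions.

```python
def lookup(n):
    # First call: do the heavy precomputation...
    table = {i: i * i for i in range(10_000)}
    # ...then rewrite this binding in place, so subsequent calls
    # go straight to the precomputed table and skip the setup.
    globals()["lookup"] = table.__getitem__
    return table[n]

print(lookup(12))   # → 144   (slow path: builds the table)
print(lookup(99))   # → 9801  (fast path: direct dict lookup)
```

After the first call, `lookup` no longer *is* the original function: the previous operation has reshaped the program to answer further inputs faster, which is exactly the memory-for-computation trade described above.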

Training an AI is like compiling source code, but in the human brain, fine-tuning (adjusting previously-calculated neuronal weights) happens dynamically, in real time. A conversation usually continues from where it last ended, leveraging holistic comprehension. If an AI could be modeled to integrate the endpoint of a conversation into its memory, as a result of when the conversation started and what input it was met with, it might improve relevancy and conscious presence.
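As a toy analogy only (not how production models are tuned), here is a single linear "neuron" whose weight is nudged after every new example, so each answer starts from the state the previous interaction left behind:

```python
def online_update(w, x, target, lr=0.1):
    """One gradient step on squared error: nudge the weight toward
    whatever the latest example demands, in real time."""
    prediction = w * x
    error = prediction - target
    return w - lr * error * x

w = 0.0                                  # "compiled" starting weight
for x, target in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    w = online_update(w, x, target)      # each update builds on the last
print(round(w, 3))                       # drifts toward the true slope 2.0
```

Each update is the weight-space version of the conversation picking up where it ended: the result of the previous input becomes the starting point for the next.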