
Phenomenally Implicit Memory: Scope and Limitations

Graeme E. Smith, GreySmith Institute of Advanced Studies
http://en.wikiversity.org/wiki/Portal:GreySmith_Institute
http://en.wikiversity.org/wiki/User:GreySmith_Institute
grysmith@telus.net

After the discovery of information theory, scientists thought to use it to discover how the brain worked. By the 1950's it was becoming obvious that the theory, while it worked well for understanding computers, did nothing at all for understanding the brain. Neuroscientists rebelled against the tyranny of Artificial Intelligence theorists and began to use the computer as a simulation tool rather than as a model for the mind. From this began the theoretical work that was the heart of the phenomenalist movement, which said there was something about the mind that could not be modeled directly on a computer. From this work came the idea of implicit memory and the idea of phenomenal consciousness, as well as many simulations of so-called Neural Networks. In this article I study the nature of the phenomenally implicit memory, its scope, and its limitations.

In the 1950's, a psychologist named George A. Miller attempted to apply Information Theory to short-term memory, hoping to determine the size of the memory. Imagine his surprise when he found that there was no direct mapping: the best he could get was a statistical measure suggesting not the size of the memory but the number of elements, of any size, that could be stored. Over time, science moved away from trying to measure the brain with information theory toward trying to simulate it.

Scientists had already been studying the nerve anyway, so this simply became the main thrust of the science afterwards. The main problem was that mammalian cells were so complex as to preclude analysis. To simplify the job, researchers experimented with simpler neurons that lent themselves to study because of their size, most famously the giant axon of the squid, studied by Hodgkin and Huxley. From that work came the first mathematical model of a simple neuron. By the 1970's scientists were familiar enough with the neural model to begin making predictions about how different neurons acting in a group would behave. One scientist, David Marr, went so far as to write, over a ten-year period, a number of models of neural function based on probability mathematics. Of interest to this discussion is his "A Theory for Cerebral Neocortex", in which he discussed a role for the cerebral cortex as a self-classifying content-addressable memory.

Theoretically, there are two types of memory addressing possible: addressing by content, and addressing by place-code. In computers, the main difference is that addressing by content is more processor-intensive. However, in computers the nature of memory is such that anything that can be addressed by content can also be addressed by address, since every storage location is built with its own address. It is nonetheless theoretically possible to have a content-addressable memory that is not addressable by place-code, if only because no place-code addressing scheme is attached to it (a minimal sketch contrasting the two schemes appears at the end of this passage).

When Neural Networks were first worked with, one of the first things determined was that there was a lack of statistical linkage between the information going into the network and the weights of the neurons in which that information had to be stored. Quite simply, the storage was a function that was different for each iteration of each architectural variation of the network. Because the statistical variation was so high, the storage was said to be "Holographic": the information was spread evenly across the network weights. However, even this was not statistically determinable. Eventually scientists decided to call it phenomenal, because it was a phenomenon of the whole network, not evenly placed, just sensitive to the timing and location of connections.

One of the problems with natural neural networks is that the connections of each network are randomized in an individual manner, so that no two networks have the same connections. Up until the 1970's there was still some hope that the connections could be mapped, and at least some computational value found in them. However, by the 1980's it was found that no such mapping was possible; in fact, the closest we could get was a mapping of the sulcus- and gyrus-level features of the surface of the cerebral cortex, associated with anatomical differences in the organization of the brain that were linked to an arcane numbering system, the Brodmann numbers.
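To make the content/place-code distinction above concrete, here is a minimal sketch in Python. It illustrates only the general computing concept, not Marr's model; all names in it are my own illustrative choices.

```python
# Minimal sketch of the two addressing schemes discussed above.
# All names here are illustrative, not drawn from Marr's model.

# Place-code addressing: every item lives at an explicit address,
# and retrieval is a direct, cheap lookup by that address.
place_coded = ["cat", "dog", "cat food"]   # the index is the place-code
item = place_coded[2]                      # one-step retrieval by address

# Content addressing: retrieval presents a partial content cue, and the
# memory returns every stored item matching it. No address comes back,
# and the scan touches the whole store, which is why it costs more.
def content_lookup(store, cue):
    """Return all items whose content matches the cue."""
    return [entry for entry in store if cue in entry]

print(content_lookup(place_coded, "cat"))  # ['cat', 'cat food']
```

Note that the list positions exist whether or not we use them, which is why anything content-addressable in a computer is also place-addressable; a memory with no index attached, as hypothesized above, would offer only the second kind of lookup.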
This was probably a great discouragement to Marr, who had hoped to prove his contention that the connections between neurons were probabilistically related to mathematical functions. It was Marr's concept that the cerebral neocortex operated as a whole as a content-addressable memory, but the Brodmann numbering system suggests wide differences in the organization of neurons across the cerebral cortex, as indicated by the sheer number of distinct Brodmann areas. This suggests that the cerebral cortex is not as homogeneous as it would have to be for Marr's theory to describe its function perfectly, let alone the function of the area he calls the cerebral neocortex. Natural neural networks are not as neatly situated as Marr's codon model would suggest, and his model only really deals with the first four layers of the cerebral cortex, not the full six found in human brains. Those first four layers are, however, significantly the heart of the implicit form of memory, so when I speak of Phenomenally Implicit Memory I am speaking of the function of the first four layers of the cortex.

Glossing over the differences for now, because each Brodmann area should be analyzed separately for function once more is known about how the brain works, I will pay lip service to Marr's model and assume that the effective mode of the memory is content addressability. If so, and I admit it is arguable, then the phenomenally implicit memory is at least nominally also a content-addressable memory, perhaps with some developmental features, created by the presence of certain types of data within the basic zones, that favor a particular arrangement of the neurons. Given this stipulation, it follows that it is a phenomenally implicit memory as well, and therefore that no single element of memory can be isolated while it is being addressed in this state. To support this contention I often refer to Jerry A. Fodor's The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, in which he notes that it is the phenomenal nature of neural networks that precludes their direct use as a demand-type memory, where each element can be isolated and demanded separately.

In my Dual Mode Cortex article, I show how this phenomenally implicit nature of the cerebral cortex is overcome by a dual-porting arrangement, but note that it can only work if, after the porting is done, a mapping function attempts to isolate and make sense of the individual sub-elements available through the place-code portal. In this article, however, I am less interested in that second portal into the Dual Mode Cortex than in how the portal acts to stretch and limit the scope of the phenomenally implicit memory. Because of the arcane nature of this relationship, in which place-code addressing merely ports into a naturally implicit memory system, the output of the system remains phenomenally implicit, forming a data-field I call a quale, which represents all forms of memory linked by content to the original stimuli (a toy sketch of this arrangement follows this passage). It is important to realize that this quale defines a limit to the intelligibility of the implicit memory, even when it is addressed via the place-code interface.
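The toy sketch below renders the dual-porting idea just described, assuming nothing beyond the text: a place-code handle does not return an isolated element, it can only re-present its stored cue to the implicit store, so the output is always the whole content-linked field, the quale. The class and method names are hypothetical.

```python
# Toy sketch of a place-code portal into an implicit memory, as described
# above. ImplicitStore, quale, and port are hypothetical illustrations.

class ImplicitStore:
    def __init__(self):
        self.associations = []        # (content, linked memories) pairs

    def learn(self, content, linked):
        self.associations.append((content, set(linked)))

    def quale(self, cue):
        """Content port: return the whole field of memories linked to the cue."""
        field = set()
        for content, linked in self.associations:
            if cue in content:
                field |= linked
        return field

    def port(self, index):
        """Place-code port: an explicit address, but all it can do is
        re-present the stored cue to the content port, so the output
        remains a quale rather than an isolated element."""
        content, _ = self.associations[index]
        return self.quale(content)

store = ImplicitStore()
store.learn("grandmother", {"face", "kitchen smell", "voice"})
store.learn("grandmother's house", {"kitchen smell", "staircase"})
print(store.port(0))   # the whole content-linked field, never one element
```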
The chunk formed when a particular functional cluster is translated to place-code addressability can merely recall the quale, or at least a quale derived from the original, as updated by implicit memory since it was first indicated in the qualar data-field. We have to be careful here to note that the same place-code address does not necessarily trigger exactly the same quale, since it is the nature of implicit memory that most of its content is not to be found in explicitly addressable memory, and there is still some uncertainty about location within the phenomenal network. This is despite the fact that the clump, as it stands after place-codes are formed, always points to the same network locations and can be stored in working memory, let alone declarative memory. Part of the redescription of the quale into explicit memory is the segregation of the contents of the quale, by editing the clump into sub-chunk arrangements of place-codes that result in mini-quales more salient than the original quale. Without going into the techniques involved, we can predict that any place-code presentation of the content of the memory will result in the addition to its quale of elements volunteered by the implicit memory it is formed from. In this sense we can say that the explicit memory is flavored by the implicit memory, without actually affecting the limits of the phenomenally implicit memory. This might seem counter-intuitive to computer geeks who assume place-code supremacy, but it follows from the implicit supremacy of the neural-network implementation.

The scope of phenomenally implicit memory therefore exceeds the limitations of implicit memory and encroaches on the explicitly addressable, place-code-based form of memory. It is partly because of this extension of scope that the barriers between implicit and explicit memory seem blurred, to the point where obviously explicit forms of memory are often labelled implicit, even though they could not exist under an implicit (content-addressable) addressing scheme. For the purposes of this theory I therefore define implicit memory as memory that is addressed implicitly, and explicit memory as memory that is addressed explicitly, despite the fact that there are implicit addenda to even the most explicitly addressed content at the qualar level.

Because the memory output is a quale no matter how explicitly it is addressed, and a quale is defined as a data-field, attempts to narrow the data-field to individual elements are unsuccessful; in fact, digit-span tests do not return just the digit, but also information about how to say the digit in the language the test is administered in. Because of this, the number of syllables in the digit's name is as important to the cost of storage as the content of the digit. Evidence shows that the rehearsal time, i.e. how long it takes to present the chunk and retrieve the digit quale, is longer for a two-syllable digit name than for a one-syllable digit name. Resolving the second syllable, in a language whose digit names run to two syllables, actually reduces the number of digits that can be stored in short-term memory: not because of storage size, since both are quales, but because the search takes longer to resolve the second syllable of the name. Evidence abounds that short-term memory is limited to the number of elements that can be rehearsed within roughly a 2-3 second window.
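The word-length effect just described reduces to back-of-envelope arithmetic: if one rehearsal pass has a roughly two-second window and each syllable costs a fixed articulation time, span falls as syllable count rises. The window follows the text; the per-syllable cost below is an assumed value, chosen only for illustration.

```python
# Back-of-envelope model of the rehearsal-window limit described above.
# The ~2 s window follows the text; the 0.25 s per-syllable cost is an
# assumption made purely for illustration.

REHEARSAL_WINDOW = 2.0        # seconds available for one rehearsal pass
SECONDS_PER_SYLLABLE = 0.25   # assumed articulation cost per syllable

def digit_span(syllables_per_digit):
    """How many digit names fit in one rehearsal pass."""
    return int(REHEARSAL_WINDOW / (syllables_per_digit * SECONDS_PER_SYLLABLE))

print(digit_span(1))   # 8 digits with one-syllable names
print(digit_span(2))   # 4 digits with two-syllable names
```

It is the element count, not the element size, that shrinks as the names lengthen, exactly as the argument above requires.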
Since it is the time it takes to resolve each memory and complete its rehearsal that affects the timing, the number of elements that can be rehearsed is distinctly limited, but not the size of the element being recovered. One manipulation has been found to affect the size of the memory: intentionally clustering the digits into three- and four-digit elements. When this is done, it expands the memory for digits, because the chunk sizes are larger and the number of chunks smaller. However, it also increases the processing time, since to get a digit out of the cluster you must first access the quale for the cluster and then the digit within it, requiring two retrieval operations instead of one (a toy sketch of this two-level retrieval follows this passage).

Phenomenally implicit, content-addressable memory areas are characterized by a highly inhibitive layer of neurons, often of the mossy-dendrite type, that select among themselves by competitive suppression of surrounding neurons, connected via direct connections or via parallel transport fibers that may contact numerous pyramidal neurons. These may or may not be managed by a layer of stellate neurons that inhibit, excite, or shunt individual pyramidal neurons according to signals usually coming from outside the cortex itself. One interesting observation made by LaBerge and Kasevich in a recent article is that the 4th layer is thicker in the primary perception areas of the brain. This suggests a greater need to manage the data in the primary perception areas than in later areas. However, this assumes that the management layer remains the 4th layer, and some confusion seems to exist because of the nature of the connection from the thalamus to the 4th layer of V1, which would place the mossy dendrites in the 4th layer if that were the data-input layer. Either the assumption that the data flows through the thalamus is incorrect, or the assumption that layer 4 manages the implicit memory is incorrect; in either case, merely giving the area a Brodmann number will not help, and we need to look at the micro-anatomy to interpret the data. It is too bad that Brodmann numbers were not available when Ramón y Cajal was doing his staining studies; samples of each Brodmann area stained with the correct stains would be very informative.

Two other areas of the brain show nearly the same implicit structure as the cerebral neocortex: the hippocampus and the cerebellar cortex. In the hippocampus, you have to treat the dentate gyrus as part of the hippocampal structure to see it. The dentate gyrus holds the inhibitive mossy dendrites, and the stellate neurons are probably to be found in CA4; CA3 and CA2 form the implicit content-addressable memory elements, which together feed into CA1, which may play a role in integrating the contents of CA3 and CA2. The outputs of CA1 then feed into the entorhinal cortex, which has been implicated in supplying mapping grids to CA3 and is thought to be the output of the hippocampal area. It is probable that the subiculum acts as the addressing mechanism that allows explicit addressing of the various hippocampal areas. If so, it operates on at least four different areas within the hippocampal region and may represent a meta-index allowing multi-dimensional addressing of data within them. An interesting possibility has cropped up: the output from the subiculum-addressed areas might be BSOs, or Beta Synchronized Objects.
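The two-level retrieval cost mentioned at the start of this passage can be caricatured in a few lines. The digit groupings and the names below are illustrative only.

```python
# Toy sketch of the two-level retrieval cost of clustered digits
# described above. The groupings and names are illustrative only.

chunked = {
    "chunk0": ["4", "1", "5"],        # three-digit cluster
    "chunk1": ["9", "2", "6", "7"],   # four-digit cluster
}

def retrieve_digit(store, chunk_key, position):
    """Two retrievals: first the cluster's quale, then the digit within it."""
    cluster = store[chunk_key]    # retrieval 1: the cluster
    return cluster[position]      # retrieval 2: the digit inside it

# Capacity in digits rises (fewer, larger chunks fill the rehearsal
# window), but each digit now costs two lookups instead of one.
print(retrieve_digit(chunked, "chunk1", 2))   # '6'
```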
In the cerebellar cortex, the main difference seems to be the substitution of Purkinje cells for normal pyramidal cells. It is thought that context is analyzed by the layer of mossy-dendrite cells, and that patterns of activity in the inferior olive are translated into activations of the Purkinje cells, which learn to associate the patterns of activity with the contexts, thus inducing patterns of activity according to context. Because this is an implicit type of memory, the output field, or quale, from the cerebellum contains a number of patterns or sequences of activity, and it is necessary to select from among them. This function is supplied by the thalamus, which associates each pattern or sequence of activity with a specific GSO, or gamma synchronized oscillation, by which tag it can be selectively filtered out of the general suppression: essentially giving one pattern the green light and suppressing the others (a caricature of this tag-and-filter scheme follows this passage). The possibility exists that outputs from the cerebellum are actually ASOs, or Alpha Synchronized Objects. If so, it would make sense that different parts of the Anterior Cingulate Cortex might be used to resolve which of the GSOs, BSOs, and ASOs get selected.

The brain waves generated would tend to favor either BSOs or ASOs depending on the type of processing going on, with BSOs prevalent when the data is being taken from declarative memory and ASOs prevalent when the activities are automated by skill memory. In each case, the result of a GSO, a BSO, or an ASO would always be the pre-activation of the cerebral cortex, making an explicit memory with implicit overtones. This means that we can make predictions about the nature of brain waves as detected by an EEG or MEG device. So even though the mode of the hippocampal-area output would be implicit in some cases, the effect would be to implement a form of explicit memory in the cerebral cortex; and since the same can be said for skill memory, this would explain the nature of the cerebral loop structures found in the brain, where one loop goes through the hippocampal area, one loop goes through the Reticular Activation System, and one loop goes through the brainstem and cerebellum. An interesting aside: since implicit memory is not associated with a sense of effort but explicit memory is, alpha waves, which indicate the use of skill memory, are not associated with a sense of effort, while explicit use of the declarative memory system is. This is why alpha waves are thought to be associated with relaxation and beta waves with attention. It is a little confusing, because skill memory is related to muscular activity and beta waves are usually associated with thinking, yet the perception is that beta waves are more energetic. In all cases, it is the synchronized frequency that is the filter determining which elements get selected by top-down attention via the ACC.
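The tag-and-filter scheme attributed to the thalamus above can be caricatured as follows: every candidate pattern carries a synchronization tag, everything is suppressed by default, and the selector passes only the tag it has green-lit. The band names follow the text; everything else is a hypothetical stand-in.

```python
# Caricature of selection by synchronization tag, as described above.
# The band names follow the text; the patterns are hypothetical.

candidates = [
    ("reach-for-cup", "gamma"),
    ("typing-sequence", "alpha"),
    ("recalled-phone-number", "beta"),
]

def select(patterns, green_lit_tag):
    """General suppression: pass only patterns carrying the green-lit tag."""
    return [name for name, tag in patterns if tag == green_lit_tag]

print(select(candidates, "alpha"))   # ['typing-sequence']
```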

Conclusion
Implicit memories have a distinctive neurological structure and are limited to qualar output. However, the scope of implicit memory is not so limited, because the architecture of the brain builds explicit memories on the implicit memory system, so even explicitly accessed memories have implicit content at the qualar level. Secondary memory loops in the brain, such as the skill-memory loop and the declarative-memory loop, have implicit elements based on the explicit address of the cerebral neocortex quale and on context information, which allow them to process different aspects of the address information. The skill memory processes sequences of clumps, while the declarative memory processes a multi-dimensional meta-index that allows us to find our clumps, and thus the quales they represent, within the cerebral cortex. The thalamus and the subiculum are probably the bottom-up addressing mechanisms for these forms of memory, allowing them to be addressed both by content and by place-code address.

Brain waves in the alpha, beta, and gamma ranges are generated by different organs of the brain, in order to allow the ACC to select among the quales generated in the cerebral cortex: alpha waves probably for the cerebellum, beta waves for the meta-index, and gamma waves for the thalamic bottom-up memory. This means that we can detect the source of a memory reference by the frequency tags associated with it. For example, a functional cluster that resonates at both gamma and beta frequencies is probably an explicit memory that was referenced via the subiculum, and therefore via the declarative memory system. A functional cluster that resonates at both gamma and alpha frequencies is probably an explicit memory that is part of a sequence automated by the skill memory. Functional clusters that resonate only at the gamma frequency are probably either implicit primary perception or explicit memories addressed at the working-memory level.
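These predictions reduce to a lookup from frequency-tag combinations to memory source. A minimal rendering follows, with the mapping taken directly from the text and the function name my own.

```python
# The conclusion's frequency-tag predictions, rendered as a lookup.
# The mapping follows the text; the function name is illustrative.

def memory_source(tags):
    tags = frozenset(tags)
    if tags == {"gamma", "beta"}:
        return "explicit memory referenced via the subiculum (declarative)"
    if tags == {"gamma", "alpha"}:
        return "explicit memory in a skill-memory-automated sequence"
    if tags == {"gamma"}:
        return "implicit primary perception, or explicit memory at the working-memory level"
    return "no prediction made in the text"

print(memory_source({"gamma", "beta"}))
```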