User:Jeblad/in-work


Notes about what is zippetti-zapetting around in my dumb head…

Flat learning
This is a technique for organizing neural nets that tries to avoid the unrolled deep structures in what is commonly called deep learning. The flat structures increase learning speed by avoiding the vanishing gradient problem, and reduce the number of training samples needed by normalizing their size and orientation and encoding their position, thereby reusing the samples.

(The name is a play on deep learning and on flat earth. No, it is not about fully connected layers.)

At least the following should be covered (some points could be split out to separate repos; see the sketch after the list):


 * place encode the entities – this is really about how features can be related to physical locations, and thus how parts are located in relation to each other.
 * normalize rotation of the entities – this is how features can be given a normalized rotation; what is up, what is down, and does it really matter?
 * normalize size of the entities – not quite sure how this should be done; neurons do it quite nicely.
 * assembling high level entities – smaller features are aggregated into larger ones, and this describes how to do it.
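
As a rough illustration of the first three points, the sketch below computes a place code, a canonical rotation, and a size estimate for a single feature patch. It is a minimal sketch under my own assumptions – a sinusoidal place code, the principal axis for rotation, and the RMS radius for size – and not code from the repository.

```python
import numpy as np

def describe_feature(patch, top_left, num_freqs=4):
    """Return (place_code, angle, scale) for a patch detected at top_left."""
    ys, xs = np.nonzero(patch)
    w = patch[ys, xs].astype(float)
    w /= w.sum()

    # Local centroid, then the centroid in image coordinates: the entity's
    # place is encoded separately so parts stay related to each other.
    my, mx = (w * ys).sum(), (w * xs).sum()
    cy, cx = my + top_left[0], mx + top_left[1]

    # Principal axis of the intensity distribution gives a canonical "up";
    # rotating the patch by -angle is the rotation normalization.
    dy, dx = ys - my, xs - mx
    cov = np.array([[(w * dx * dx).sum(), (w * dx * dy).sum()],
                    [(w * dx * dy).sum(), (w * dy * dy).sum()]])
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(major[1], major[0])

    # RMS radius as a crude size estimate; dividing coordinates by it
    # is the size normalization.
    scale = np.sqrt((w * (dx ** 2 + dy ** 2)).sum())

    # Sinusoidal place code for the centroid, loosely like grid cells.
    freqs = 2.0 ** np.arange(num_freqs)
    phases = np.outer(freqs, [cy, cx])
    place_code = np.concatenate([np.sin(phases), np.cos(phases)]).ravel()
    return place_code, angle, scale

patch = np.zeros((5, 5)); patch[2, 1:4] = 1.0   # a horizontal bar
code, angle, scale = describe_feature(patch, top_left=(10, 20))
```

A downstream net would then see the normalized patch together with the place code, so the same sample can serve at any position, rotation, and size.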

Repository is at

Neural reasoning
This is a play with mathematical symbols, and shows how a sequence of statements in a neural network can form an “argument”. Unfortunately it is not obvious how the statements can be found, but there might be an inference-like solution, given a rather opportunistic approach to reasoning.

It started with some fun relationships between how layers in neural networks interact and how similar that is to linked data, formalizing it as mathematical logic and adding known structures from neuroscience to the mix. Suddenly it starts to make sense, but perhaps it is utter nonsense.
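
One way to read this, as a minimal sketch: facts are a binary activation vector, implications are a binary weight matrix, and each layer applies the rules once, so the sequence of layer states is the “argument”. The rule set below is invented for illustration.

```python
import numpy as np

facts = ["rains", "ground_wet", "slippery"]

# rules[i, j] = 1 means fact j implies fact i
# (rains -> ground_wet, ground_wet -> slippery).
rules = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0]])

state = np.array([1, 0, 0])          # it rains

# Each "layer" is one application of the rules followed by a threshold,
# i.e. one modus ponens step; the sequence of states is the "argument".
for step in range(2):
    state = np.clip(state + rules @ state, 0, 1)
    print(step + 1, dict(zip(facts, state)))
```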

Neural kernels
Description of alternate kernels for multiplication and summation, with focus on how bad they can be and still work properly. These are not really well-behaved analytical functions; they are surrogates that have sufficient properties to do the job. It also touches on how kernels lose information, and a kind of minimum criterion for a surrogate kernel.

Neural kernels for artificial neural networks are both a multiplier for the synapses and a summation for the dendritic tree. For spiking neural networks, like biological neural networks, the kernels might be slightly different.
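
A minimal sketch of the idea, with the two kernels as pluggable functions; the names `multiply` and `summate` are my own, not from the text.

```python
from functools import reduce

def neuron(inputs, weights, multiply, summate):
    """One unit: synapse kernel pairwise, then dendrite kernel as a reduction."""
    synapses = [multiply(x, w) for x, w in zip(inputs, weights)]
    return reduce(summate, synapses)

# The ordinary analytic kernels recover a plain dot-product unit.
out = neuron([0.5, 1.0, -2.0], [2.0, -1.0, 0.5],
             multiply=lambda x, w: x * w,
             summate=lambda a, b: a + b)
print(out)   # 1.0 - 1.0 - 1.0 = -1.0
```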

The final outcome is that logic AND-OR gates are sufficient surrogates for multiplication and summation, which is sort of obvious. It has some consequences for Tsetlin machines, which become nothing more than fully connected artificial neural networks with AND-OR as kernels.
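
A minimal sketch of that claim, assuming binary inputs and weights: with AND as the synapse kernel and OR as the dendrite kernel, the unit agrees with a clipped multiply-sum on every binary input, which is the sense in which the gates are sufficient surrogates. The exhaustive check is my own illustration.

```python
from itertools import product

def gate_neuron(inputs, weights):
    """A unit with AND as the synapse kernel and OR as the dendrite kernel."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc |= x & w
    return acc

# On 0/1 values the surrogate agrees with a clipped multiply-sum everywhere.
for bits in product([0, 1], repeat=4):
    x, w = bits[:2], bits[2:]
    assert gate_neuron(x, w) == min(1, sum(a * b for a, b in zip(x, w)))
```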

Crowdsource constant
I wrote a piece just for fun about whether there exists a “crowdsource constant”. Looking at the size of the Wikipedia communities in several countries, it seems like there could be some mechanism that sets an upper limit on the size of the communities.

With more data it could be interesting to retrace the work.

The old article is at