English / ML - ablation study/analysis

In the context of deep learning, what is an ablation study? - Quora (may God forgive me for a Quora link)

An ablation study typically refers to removing some “feature” of the model or algorithm, and seeing how that affects performance.

and

An ablation study is where you systematically remove parts of the input to see which parts of the input are relevant to the network's output.

First seen on page 3 of Deep contextualized word representations.
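
The second definition is easy to demonstrate. Here is a toy sketch of input ablation; `ToyModel` and everything else here is hypothetical, just to show the pattern of removing one input at a time and measuring the score drop:

    import numpy as np

    class ToyModel:
        """Hypothetical stand-in for a trained model."""
        def __init__(self, w):
            self.w = np.asarray(w, dtype=float)
        def predict(self, x):
            return float(self.w @ x)

    def input_ablation(model, x, baseline=0.0):
        # Replace one input feature at a time with a baseline value;
        # a big score drop means the feature mattered for the output.
        full = model.predict(x)
        drops = []
        for i in range(len(x)):
            x_abl = np.array(x, dtype=float)
            x_abl[i] = baseline
            drops.append(full - model.predict(x_abl))
        return np.array(drops)

    print(input_ablation(ToyModel([3.0, 0.0, 1.0]), [1.0, 1.0, 1.0]))
    # -> [3. 0. 1.]  (feature 0 is the most relevant)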

Knowledge wikis

Gabor Melli’s Research Knowledge Base looks to be basically what I’ve tried to do. His layout for the posts is also interesting; for example, see Personal Blog - GM-RKB: it has Context, Example(s), Counter-Example(s), and References sections. But I can't find any meta-level page describing the wiki itself beyond Gabor Melli’s Knowledge Base (GM-RKB) - GM-RKB. The more interesting topics usually contain extra material; for example, on Meaningless Universe Theory - GM-RKB the References contain actual quotes.

I really really really want to resurrect my own wiki.

Tensorflow v1

    import tensorflow as tf  # TF 1.x

    e = tf.layers.Dense(4)   # assumption: `e` is some layer that owns variables
    myinput = tf.ones([1, 3])
    x = e(myinput)           # equivalent to e.__call__(myinput)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())  # initialize the layer's variables
    result = sess.run(x)

The tf.global_variables_initializer() call gets rid of the “Attempting to use uninitialized value” errors.

CNN visualization (ML)

How to visualize convolutional features in 40 lines of code has very cool pictures of intermediate layers of a CNN, along with generated images that seem to match those features. This is really, really cool.
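
The core trick there is activation maximization: start from noise and run gradient ascent on the image so that one channel's activation goes up. A minimal PyTorch sketch of the general idea (not the article's actual code; the layer index and channel here are arbitrary):

    import torch
    import torchvision.models as models

    # Maximize one channel of a conv layer by gradient ascent on the input image.
    features = models.vgg16(pretrained=True).features.eval()
    layer_idx, channel = 10, 5              # arbitrary layer/channel to visualize

    img = torch.randn(1, 3, 128, 128, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=0.1)

    for _ in range(100):
        optimizer.zero_grad()
        x = img
        for i, layer in enumerate(features):
            x = layer(x)
            if i == layer_idx:
                break
        loss = -x[0, channel].mean()        # ascend on the channel's mean activation
        loss.backward()
        optimizer.step()
    # `img` now (roughly) shows what that channel responds to.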

qutebrowser bindings

qutebrowser/.config/qutebrowser/autoconfig.yml · 39516940c80b70bab059e563a129709882f4a41e · Jay Kamat / dotfiles · GitLab has very interesting bindings, with commands like fake-key and stuff, along with per-website(?) bindings and JavaScript whitelisting.
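
The same kind of thing can also be done in a config.py; a minimal sketch from the docs, not his actual config (the binding and the whitelisted domain are made up):

    # Runs inside qutebrowser's config.py, where `config` is predefined.
    config.load_autoconfig(False)

    # fake-key sends a fake keypress to the page; hypothetical example binding.
    config.bind(',p', 'fake-key <Space>')

    # Per-site setting: disable JavaScript globally, whitelist one domain.
    config.set('content.javascript.enabled', False)
    config.set('content.javascript.enabled', True, 'https://github.com/*')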

CNN tutorial

Again I come back to this nice resource: 6.4. Multiple Input and Output Channels — Dive into Deep Learning 0.7.1 documentation

It has a very nice explanation of CNNs and in/out channels and stuff.

Especially output channels:

Regardless of the number of input channels, so far we always ended up with one output channel. However, as we discussed earlier, it turns out to be essential to have multiple channels at each layer. In the most popular neural network architectures, we actually increase the channel dimension as we go higher up in the neural network, typically downsampling to trade off spatial resolution for greater channel depth. Intuitively, you could think of each channel as responding to some different set of features. (6.4. Multiple Input and Output Channels — Dive into Deep Learning 0.7.1 documentation)
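
The mechanics are easy to write down. A quick NumPy sketch of multi-input/multi-output-channel cross-correlation along the lines of that chapter (their code uses the d2l/MXNet stack, so this is my own restatement):

    import numpy as np

    def corr2d(X, K):
        # Plain 2D cross-correlation of one input plane with one kernel.
        h, w = K.shape
        Y = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
        for i in range(Y.shape[0]):
            for j in range(Y.shape[1]):
                Y[i, j] = (X[i:i+h, j:j+w] * K).sum()
        return Y

    def corr2d_multi_in(X, K):
        # X: (c_in, H, W), K: (c_in, kh, kw) — correlate per channel, sum results.
        return sum(corr2d(x, k) for x, k in zip(X, K))

    def corr2d_multi_in_out(X, K):
        # K: (c_out, c_in, kh, kw) — one multi-input kernel per output channel.
        return np.stack([corr2d_multi_in(X, k) for k in K])

    X = np.random.randn(3, 8, 8)       # 3 input channels
    K = np.random.randn(5, 3, 3, 3)    # 5 output channels, 3x3 kernels
    print(corr2d_multi_in_out(X, K).shape)  # (5, 6, 6)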

Duckduckgo I’m feeling lucky

This is awesome! Works with a backslash: searching \tf.nn.maxpool takes you straight to the first result.