- GitHub - fchollet/keras-resources: Directory of tutorials and open-source code repositories for working with Keras, the Python deep learning library
Tensorflow GloVe embeddings
- TensorFlow 07: Word Embeddings (2) – Loading Pre-trained Vectors – Night Café
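A minimal sketch of the pattern from that post: loading GloVe vectors into a matrix that can seed a Keras `Embedding` layer. The file name and the tiny `vocab` dict are assumptions for illustration.

```python
# A minimal sketch, assuming a local glove.6B.100d.txt and a word->index
# vocabulary; builds an embedding matrix usable as initial Keras weights.
import numpy as np

embedding_dim = 100
vocab = {"the": 0, "cat": 1}  # hypothetical vocabulary

embeddings_index = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        word, *vector = line.split()
        embeddings_index[word] = np.asarray(vector, dtype="float32")

embedding_matrix = np.zeros((len(vocab), embedding_dim))
for word, i in vocab.items():
    if word in embeddings_index:
        embedding_matrix[i] = embeddings_index[word]

# Then e.g.:
# tf.keras.layers.Embedding(len(vocab), embedding_dim,
#     embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
#     trainable=False)
```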
- switch to split screen in IntelliJ (IdeaVim) - Stack Overflow, for window splits; `Ctrl+w w` quickly switches between splits.
- vim - Escape to IntelliJ IDEA shortcuts from IdeaVim - Stack Overflow has some neat ways to escape from IdeaVim to the IDE's own shortcuts.
- Monospaced Programming Fonts with Ligatures - Scott Hanselman
- I found Fira installed by default; with ligatures it looks quite nice.
- `/=` division assignment operator, which works like `+=`.
- `X if cond else Y` conditional expressions.
- Can also be used in return statements, as in the sketch below:
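A minimal sketch of both uses of the conditional expression; the function and variable names are hypothetical.

```python
# Python's conditional expression: X if cond else Y.
def sign(x):
    # Used directly in a return statement.
    return 1 if x >= 0 else -1

# Used in a plain assignment.
label = "neg" if sign(-3) < 0 else "non-neg"
print(label)  # -> neg
```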
The correspondence between operator symbols and method names is as follows: `x<y` calls `x.__lt__(y)`, `x<=y` calls `x.__le__(y)`, `x==y` calls `x.__eq__(y)`, `x!=y` calls `x.__ne__(y)`, `x>y` calls `x.__gt__(y)`, and `x>=y` calls `x.__ge__(y)`.
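A minimal sketch of defining these dunder methods on a class; `Version` is a hypothetical example, and `functools.total_ordering` derives the remaining comparisons from `__eq__` and `__lt__`.

```python
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

print(Version(1, 2) < Version(1, 10))   # True -> calls __lt__
print(Version(1, 2) >= Version(1, 2))   # True -> derived by total_ordering
```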
- Deep generative models class has interesting notes for a ‘concise introductory course’ on deep generative models.
- Count Bayesie - A Probability Blog – TODO add to Newsbeuter! And probably read through. Yes, I should resurrect the link blog.
- Query = [kwee ree]
- Paradigm = [pa ruh daim] or American [peh ruh daim]
`gt` command to search tabs.
`:buffer` works as a best-match tab-filtering thingy; this is nice with 30+ tabs, when tab numbers don't cut it anymore.
Tensorflow Datasets: the easiest way to look inside one is with an iterator, as in the sketch below.
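A minimal sketch, assuming TensorFlow 2.x eager mode; the toy dataset is hypothetical.

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30])

# Option 1: an explicit Python iterator.
it = iter(dataset)
print(next(it).numpy())  # -> 10

# Option 2: take() a few elements and loop over them.
for element in dataset.take(2):
    print(element.numpy())
```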
Tensorflow Keras Callbacks
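A minimal sketch of a custom Keras callback, assuming TensorFlow 2.x; `LossLogger` is a hypothetical name, shown next to a built-in callback for contrast.

```python
import tensorflow as tf

class LossLogger(tf.keras.callbacks.Callback):
    # Called by Keras at the end of every epoch with the metric logs.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: loss={logs.get('loss', float('nan')):.4f}")

# Passed alongside built-in callbacks when fitting:
# model.fit(x, y, epochs=5,
#           callbacks=[LossLogger(), tf.keras.callbacks.EarlyStopping(patience=2)])
```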
NER PRC and F-score (F1, metrics, precision, recall)
… uses only the entity tags. That is, the “not a name” (O) tags are not counted at all when calculating precision/recall; otherwise the results would be too good, since most tokens are not entities. This is a NER-specific thing, though. A toy calculation is sketched below.
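A minimal sketch with hypothetical data: token-level precision/recall/F1 that ignores the "O" tag, as described above.

```python
gold = ["O", "B-PER", "I-PER", "O", "B-LOC"]
pred = ["O", "B-PER", "O",     "O", "B-LOC"]

# True positives: matching non-"O" tags; "O" tokens never count.
tp = sum(g == p != "O" for g, p in zip(gold, pred))
pred_entities = sum(p != "O" for p in pred)
gold_entities = sum(g != "O" for g in gold)

precision = tp / pred_entities
recall = tp / gold_entities
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 1.0 0.666... 0.8
```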
Current qutebrowser config.py
IntelliJ IDEA Vim plugin
It supports the following `:set` commands: ideavim/set-commands.md at master · JetBrains/ideavim · GitHub. Especially `relativenumber` is nice (e.g. `set relativenumber` in `~/.ideavimrc`).
ML/Tensorflow logits meaning
Logits are the inputs to the softmax function:
the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
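A minimal sketch, assuming TensorFlow 2.x, of the logits-to-probabilities step described above; the example values are arbitrary.

```python
import tensorflow as tf

logits = tf.constant([2.0, 1.0, 0.1])  # raw, non-normalized scores
probs = tf.nn.softmax(logits)          # normalized to sum to 1
print(probs.numpy())                   # ~[0.659, 0.242, 0.099]

# Many losses can consume logits directly for numerical stability, e.g.:
# tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```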
ML attention animated
After all this time, I found this excellent animated example of attention, transformers, and RNNs: Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) – Jay Alammar – Visualizing machine learning one concept at a time. From the same source, A Visual Intro to NumPy and Data Representation – Jay Alammar – Visualizing machine learning one concept at a time looks very nice. I should resurrect my link wiki instead of pasting it all here.
Interesting python syntax
Interesting Python syntax I’ve seen in the Transformer Google repo:
Kinda relevant is code golf - Tips for golfing in Python - Code Golf Stack Exchange.
Python rich comparison operators
I should really create a vim thingy that automatically creates footnotes from a link. I can imagine it as a keystroke which generates a random footnote name and puts you on the last line of the file, with the footnote name prefilled, and in insert mode. Or another one that lets you specify a footnote name.
apt-get purge and zsh
zsh does its own wildcard expansion, and
`apt-get purge nvidia*` doesn't work because of this.
`apt-get purge nvidia\*` does (or wrapping the pattern in quotes). Same story as with scp; I'm surprised I keep having issues with this.
Google has nice animations for this!
Most of this while I’m reading the “Attention is all you need” paper. The most important resources will be The Illustrated Transformer – Jay Alammar – Visualizing machine learning one concept at a time and 9.3. Transformer — Dive into Deep Learning 0.7 documentation.