Distributed TensorFlow graph computation
https://medium.com/searchink-eng/keras-horovod-distributed-deep-learning-on-steroids-94666e16673d
To put things into perspective, we were running an Inception3 architecture with a sample of 18 thousand documents on a 1 * 12GB Tesla K80 GPU. Each epoch took about 30 minutes. With Horovod and an upgraded instance with 4 * 12GB Tesla K80 GPUs, each epoch was reduced to about 5–6 minutes.
Books
TF Learn simple example
What’s a tensor?
Tensorboard
`import tensorflow as tf`
`a = tf.add(3, 5)`
`print(a)`
`>> Tensor("Add:0", shape=(), dtype=int32)  # (Not 8)`
How to get the value of a?
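Create a session and fetch the tensor; the graph only produces values when run. A minimal sketch of the answer (written against the `tf.compat.v1` API so it also runs under TensorFlow 2.x, which is an assumption about your install):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.add(3, 5)  # a is a Tensor node in the graph, not the value 8

with tf.Session() as sess:
    print(sess.run(a))  # evaluating the node in a session yields 8
```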
subgraph
Run part of a graph on a specific GPU or CPU (for parallel computation)
● Multiple graphs require multiple sessions, and each session will try to use all available resources by default
● Can't pass data between them without passing it through Python/NumPy, which doesn't work in a distributed setting
● It's better to have disconnected subgraphs within one graph

TensorBoard
constant types
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

Operations

A graph's definition is serialized as a protobuf. Protobuf stands for protocol buffer, “Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler.”
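For example, the serialized graph definition can be inspected with `as_graph_def()`, which returns a `GraphDef` protocol buffer (a sketch using the `tf.compat.v1` API; the constant name `"c"` is just an illustration):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

c = tf.constant(2.0, name="c")

# as_graph_def() returns the GraphDef protobuf describing every node
graph_def = tf.get_default_graph().as_graph_def()
print(graph_def)  # text rendering lists a node named "c" with op "Const"
```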
Variables
Initialize all variables at once:

`init = tf.global_variables_initializer()`
`with tf.Session() as sess:`
`    sess.run(init)  # note that you use sess.run() to run the initializer, not fetching any value`

Initialize only a subset of variables, with a list of variables to initialize:

`init_ab = tf.variables_initializer([a, b], name="init_ab")`
`with tf.Session() as sess:`
`    sess.run(init_ab)`
# create variable W as a 700 x 10 tensor, initialized from a truncated normal distribution
W = tf.Variable(tf.truncated_normal([700, 10]))
W = tf.Variable(10)

Declare a variable that depends on other variables
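A sketch of that pattern: initialize the dependent variable from the other variable's *initial value*, so the initialization order is well defined (using `initialized_value()`, which is deprecated but still available through the `tf.compat.v1` API):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

W = tf.Variable(tf.truncated_normal([700, 10]))
# U depends on W: use W.initialized_value(), not W itself, so that
# W is guaranteed to be initialized before U reads it
U = tf.Variable(W.initialized_value() * 2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    w, u = sess.run([W, U])  # u holds exactly twice w's initial values
```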
Control Dependencies
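A sketch of control dependencies: any op created inside a `tf.control_dependencies` block will only run after the listed ops have run (variable names here are illustrative; written against the `tf.compat.v1` API):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

v = tf.Variable(0)
assign_op = tf.assign(v, 10)

# out can only execute after assign_op has executed
with tf.control_dependencies([assign_op]):
    out = tf.identity(v)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(out)       # triggers assign_op first
    print(sess.run(v))  # v has been set to 10
```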
placeholder
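A placeholder is a graph node whose value is supplied at run time through `feed_dict`; a minimal sketch (using the `tf.compat.v1` API):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# no value yet -- the shape is declared, the data comes later
x = tf.placeholder(tf.float32, shape=[3])
y = x * 2

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
    print(result)  # [2. 4. 6.]
```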
# create Operations, Tensors, etc (using the default graph)
a = tf.add(2, 5)
b = tf.multiply(a, 3)  # tf.mul was renamed tf.multiply

# start up a `Session` using the default graph
sess = tf.Session()

# define a dictionary that says to replace the value of `a` with 15
replace_dict = {a: 15}

# Run the session, passing in `replace_dict` as the value to `feed_dict`
sess.run(b, feed_dict=replace_dict)  # returns 45
feed_dict can be extremely useful to test your model. When you have a large graph and just
want to test out certain parts, you can provide dummy values so TensorFlow won’t waste time
doing unnecessary computations.

The trap of lazy loading
We also add some extra evidence called a bias. Basically, we want to be able to say that some things are more likely, independent of the input. But it's often more helpful to think of softmax the first way: exponentiating its inputs and then normalizing them. The exponentiation means that one more unit of evidence increases the weight given to any hypothesis multiplicatively. And conversely, having one less unit of evidence means that a hypothesis gets a fraction of its earlier weight. Softmax then normalizes these weights, so that they add up
to one, forming a valid probability distribution.
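A tiny numeric sketch of that (in NumPy, since the math does not need a graph): with logits [2, 1, 0], each extra unit of evidence multiplies the weight by e, and normalizing makes the outputs a probability distribution:

```python
import numpy as np

def softmax(logits):
    # exponentiate, then normalize so the outputs sum to one
    exps = np.exp(logits - np.max(logits))  # shift by the max for numerical stability
    return exps / exps.sum()

p = softmax(np.array([2.0, 1.0, 0.0]))
print(p)        # roughly [0.665, 0.245, 0.090] -- each step down divides by e
print(p.sum())  # 1.0
```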
In order to train our model, we need to define what it means for the model to be good. Well, actually, in machine learning we typically define what it means for a model to be bad, called the cost or loss, and then try to minimize how bad it is. But the two are equivalent.

The cross-entropy is H_{y′}(y) = −Σᵢ y′ᵢ log(yᵢ), where y is our predicted probability distribution, and y′ is the true distribution (the one-hot vector we'll input). In some rough sense, the cross-entropy is measuring how inefficient our predictions are for describing the truth. Going into more detail about cross-entropy is beyond the scope of this tutorial, but it's well worth understanding.

What TensorFlow actually does here, behind the scenes, is it adds new operations to your graph which implement backpropagation and gradient descent. Then it gives you back a single operation which, when run, will do a step of gradient descent training, slightly tweaking your variables to reduce the cost.
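A sketch of what that looks like in code, following the MNIST tutorial's formulation (the 784/10 shapes and the 0.5 learning rate are assumptions taken from that tutorial; a dummy random batch stands in for real data, and the `tf.compat.v1` API is used for runnability):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)       # predicted distribution
y_ = tf.placeholder(tf.float32, [None, 10])  # true distribution (one-hot)

# cross-entropy: -sum(y' * log(y)) per example, averaged over the batch
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))

# minimize() adds the backpropagation and gradient-descent ops to the graph
# and returns a single op that performs one training step when run
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    xs = np.random.rand(4, 784).astype(np.float32)   # dummy batch
    ys = np.eye(10, dtype=np.float32)[[0, 1, 2, 3]]  # dummy one-hot labels
    before = sess.run(cross_entropy, {x: xs, y_: ys})
    sess.run(train_step, {x: xs, y_: ys})
    after = sess.run(cross_entropy, {x: xs, y_: ys})
    print(before, after)  # the loss drops after one step on the same batch
```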
