Placeholders can be thought of as containers. They contain no values of their own until you fill them up with something. Just like physical containers, there are also restrictions on the shape and size of things you can put in them.

Placeholders are useful when you have tensors whose values you want to control throughout a session. For neural networks, you would want to use placeholders for the inputs and labels since you will have different values for each batch of the training set, validation set, test set, etc. You might also want to use them for passing hyper-parameter values such as the learning rate.

Declaring a placeholder is just a matter of placing a line like the following when creating a graph.


```python
tf.placeholder(tf.float32, shape=[3, 4])
```

Here we declared that we want a tensor that is 3 by 4 in size and composed of elements that are or can be converted to 32-bit floats.

A very important feature of placeholders is that they require feeding: we need to feed them a value every time we run the graph in a session. If we try to run a graph without feeding values to its placeholders, we get an error message such as:

```
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape dim { size: 3 } dim { size: 4 }
```

See for yourself by running the following code:

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    tf_a = tf.placeholder(tf.float32, shape=(3, 4))
    tf_b = tf_a * 2

with tf.Session(graph=graph) as session:
    # run without feeding `tf_a` a value
    output = session.run(tf_b)
    print(output)
```

```
[OUTPUT]
... InvalidArgumentError ...: You must feed a value for placeholder tensor ...
```

In order to feed the placeholder **tf_a**, we can make use of the `feed_dict` argument of the `session.run()` function. This argument takes a dictionary that maps each placeholder you need to feed to the value you want to pass to it. If we now feed **tf_a** a value, the graph runs without errors.

```python
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    tf_a = tf.placeholder(tf.float32, shape=(3, 4))
    tf_b = tf_a * 2

# Create an array to be fed to `tf_a`
input_array = np.ones((3, 4))

with tf.Session(graph=graph) as session:
    # run, feeding `tf_a` a value through feed_dict
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
[[ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]]
```

Technically, we can use `feed_dict` to feed values to constant and variable tensors too. But it is best practice to use it only on placeholders, since feeding is their sole purpose. In fact, we saw in the last example that we are forced to feed them values.

Placeholders will only allow you to feed them values that match the shape they were declared with. When we declared the placeholder, we told it that we want a 3 by 4 tensor composed of floats. If we try to feed it a 10 by 4 tensor instead, we get an error message such as:

```
ValueError: Cannot feed value of shape (10, 4) for Tensor u'Placeholder:0', which has shape (Dimension(3), Dimension(4))
```

Run the following code to see for yourself:

```python
# Create an array to be fed to `tf_a`
input_array = np.ones((10, 4))

with tf.Session(graph=graph) as session:
    # feed `tf_a` a value that is not 3 by 4
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
... ValueError: Cannot feed value of shape (10, 4) for Tensor 'Placeholder:0', which has shape '(3, 4)'
```

We may want the placeholder to be a little less fussy about what we feed it. For example, when creating a neural network, the inputs may contain varying numbers of samples. Our training, validation, and test sets usually won't have the same number of samples, and in production we will want to make predictions on different amounts of data; sometimes we may only want to make a prediction on a single sample. In all of these cases, though, the data still has something in common: we will want at least one of the dimensions to be the same size.

With placeholders, we can constrain the size along certain dimensions and be completely flexible along others. To do so, we use the `shape` argument, placing a `None` value for the dimensions we want to be flexible with and an integer value for the dimensions we want to constrain.

In the example below, we make the first axis (number of rows) flexible and constrain the second axis (number of columns) to 4. We feed it a 5 by 4 tensor:

```python
import numpy as np
import tensorflow as tf

# Create a 5 by 4 array to be fed to `tf_a`
input_array = np.ones((5, 4))

graph = tf.Graph()
with graph.as_default():
    # Create a placeholder that is flexible along the first
    # dimension and fixed along the second dimension.
    tf_a = tf.placeholder(tf.float32, shape=(None, 4))
    tf_b = tf_a * 2

with tf.Session(graph=graph) as session:
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
[[ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]]
```

Now we test it with a 2 by 4 tensor.

```python
# Create a 2 by 4 array to be fed to `tf_a`
input_array = np.ones((2, 4))

with tf.Session(graph=graph) as session:
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
[[ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]]
```

But if we test it on a tensor whose second axis is not 4, we get an error message like this:

```
ValueError: Cannot feed value of shape (2, 5) for Tensor 'Placeholder:0', which has shape '(?, 4)'
```

Go ahead! Try running the code below to see for yourself.

```python
# Create an array to feed to `tf_a` that has a different
# number of columns than declared in the placeholder
input_array = np.ones((2, 5))

with tf.Session(graph=graph) as session:
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
... ValueError: Cannot feed value of shape (2, 5) for Tensor 'Placeholder:0', which has shape '(?, 4)'
```

Placeholders also place restrictions on the datatype that we feed them. The first argument to the `tf.placeholder()` function is the data type.

```python
a = tf.placeholder(tf.float32, shape=(None, 4))
```

In the example we have been using, we specified that it should be a `tf.float32` (or at least a datatype that can be typecast to a `tf.float32`). Integers, floats, and strings containing only numbers can all be typecast to `tf.float32`, so we can feed it arrays containing any of those values. But if we feed it something different, like an array of non-numeric strings, then it will throw an error message like the following:

```
ValueError: could not convert string to float: alice
```

Go ahead! Run the following code to see for yourself!

```python
# Create an array to feed to `tf_a` that has a different
# datatype than declared in the placeholder
input_array = np.array([["alice", "bob", "claire", "danny"],
                        ["ellie", "fred", "gina", "harry"]])

with tf.Session(graph=graph) as session:
    output = session.run(tf_b, feed_dict={tf_a: input_array})
    print(output)
```

```
[OUTPUT]
ValueError: could not convert string to float: 'alice'
```

In the previous examples, we specified which placeholder we wanted to pass values to by using the Python variable object directly:

```python
tf_a = tf.placeholder(...)
...
feed_dict={tf_a: input_array}
```

But we can also assign a string name to the placeholder, and feed values to it by specifying that name:

```python
tf_a = tf.placeholder(..., name="myplaceholder")
...
feed_dict={"myplaceholder:0": input_array}
```

Note that we must place a ":0" at the end of the name when identifying the tensor: the name itself refers to the operation, and ":0" selects its first output tensor.

A more complete example you can try running is below:

```python
import numpy as np
import tensorflow as tf

# Create a 5 by 4 array to be fed to `tf_a`
input_array = np.ones((5, 4))

graph = tf.Graph()
with graph.as_default():
    # Create a placeholder with a name
    tf_a = tf.placeholder(tf.float32, shape=(None, 4),
                          name="my_placeholder")
    tf_b = tf_a * 2

with tf.Session(graph=graph) as session:
    output = session.run(tf_b,
                         feed_dict={"my_placeholder:0": input_array})
    print(output)
```

```
[OUTPUT]
[[ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]
 [ 2.  2.  2.  2.]]
```