[Learn about machine learning from Keras] — 8. The evaluate process

Czxdas · 3 min read · Sep 22, 2023

This article walks through what happens inside Keras when evaluate is called to evaluate a model.

from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Flatten the 28x28 images into 784-dimensional vectors and scale them to [0, 1].
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype("float32") / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype("float32") / 255


from tensorflow.keras.models import Sequential
from tensorflow.keras import layers

model = Sequential([
    layers.Dense(512, activation="relu")
])

model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.build(input_shape=(None, 784))

# Layers can still be added after compile/build; the 10-unit softmax output
# layer must come last so the class probabilities match the 10 MNIST labels.
model.add(layers.Dense(20, activation="relu"))
model.add(layers.Dense(10, activation="softmax"))

model.fit(train_images, train_labels, epochs=5, batch_size=128)

test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"test_acc: {test_acc}")

(1)
In this example, execution first enters the body of the keras.engine.training.Model.evaluate function, which passes the supplied arguments and flags into the keras.engine.data_adapter.DataHandler constructor. The same thing is done during model training: the arguments of the training function are wrapped into a data_adapter.DataHandler instance, which means evaluation, like prediction, can be run batch by batch.
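
To make the batching concrete, here is a rough public-API picture of what the DataHandler ends up providing (a sketch only; evaluate's default batch_size of 32 is assumed, and the real DataHandler also handles generators, tf.data datasets, sample weights, and distribution strategies):

import tensorflow as tf

# Sketch: evaluate() slices x/y into batches and iterates them for exactly one
# "epoch"; each batch is then handed to the test function described below.
dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(32)
for step, (x_batch, y_batch) in enumerate(dataset):
    pass  # one evaluation step per batch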

(2)
Callback objects can also be passed in. If the callbacks argument is not already an instance of the keras.callbacks.CallbackList class, it is folded into a keras.callbacks.CallbackList, which is the container that holds the callbacks. During this wrapping, just as during training, the resulting keras.callbacks.CallbackList will always contain a keras.callbacks.ProgbarLogger instance and a keras.callbacks.History instance.
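
Roughly, the wrapping looks like the sketch below (based on the TF 2.x source; the exact keyword arguments evaluate passes differ between Keras versions):

from tensorflow.keras import callbacks as callbacks_module

cb_list = callbacks_module.CallbackList(
    callbacks=None,      # whatever the caller passed to evaluate()
    add_history=True,    # guarantees a History callback is present
    add_progbar=True,    # guarantees a ProgbarLogger callback is present
    model=model,
)
# cb_list.callbacks now contains both a History and a ProgbarLogger instance.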

(3)
Then the keras.engine.training.Model._get_test_function_runner function is called to decide which test_function_runner will be used for each test step. A closure-like mechanism is used here to build the function ahead of time and keep it in memory, and that cached function is reused when the batches are run.

Up to this point, the flow is very similar to model.predict.

The difference is that the cached function is wrapped into the keras.engine.training.Model._TestFunction class, which also carries the run_step sub-function for later use.
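
Conceptually, the caching works like the hypothetical sketch below (the attribute name _test_function_cache is made up for illustration; the real logic lives in Model.make_test_function and the _TestFunction wrapper):

import tensorflow as tf

def get_test_function(model):
    if getattr(model, "_test_function_cache", None) is None:
        @tf.function
        def one_step(iterator):
            data = next(iterator)
            return model.test_step(data)   # Model.test_step is the real per-batch hook
        model._test_function_cache = one_step
    return model._test_function_cache      # reused for every subsequent batch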

(4)
The main body of the evaluation loop is as follows:

for _, dataset_or_iterator in data_handler.enumerate_epochs():  # only 1 epoch is specified.
    for step in data_handler.steps():
        logs = test_function_runner.run_step(
            dataset_or_iterator,
            data_handler,
            step,
            self._pss_evaluation_shards,
        )

(5)
The loop mainly runs test_function_runner.run_step, which executes the keras.engine.training.Model.test_step function. That function goes straight to keras.engine.data_adapter.unpack_x_y_sample_weight to unpack the incoming data (a list or other collection type) into a tuple and return it.
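
Lightly simplified from the TF 2.x Keras source, test_step looks like this (unpack_x_y_sample_weight is also exposed publicly as tf.keras.utils.unpack_x_y_sample_weight; newer versions return self.get_metrics_result() instead of building the dict by hand):

def test_step(self, data):
    # data_adapter here is keras.engine.data_adapter
    x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
    y_pred = self(x, training=False)    # forward pass through the model
    self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
    self.compiled_metrics.update_state(y, y_pred, sample_weight)
    return {m.name: m.result() for m in self.metrics}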

Calling the model first executes keras.engine.sequential._build_graph_network_for_inferred_shape, which fixes the tensor dimensions of the input, and then moves on to keras.engine.functional._run_internal_graph, which walks the Layer nodes the model has built and executes each Layer.call in sequence. In this example, keras.layers.core.dense.Dense.call is executed for each Dense layer in turn; it essentially takes the inner product of the input with the layer's own Layer.kernel and returns the result to the caller. (After each layer is computed, its output is saved to the Sequential model's outputs in keras.engine.sequential.)
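
For a 2-D input, what Dense.call computes boils down to the following sketch (the real implementation also covers higher-rank inputs, sparse tensors, and other corner cases):

import tensorflow as tf

def dense_call(inputs, kernel, bias, activation):
    outputs = tf.matmul(inputs, kernel)        # inner product with Layer.kernel
    if bias is not None:
        outputs = tf.nn.bias_add(outputs, bias)
    if activation is not None:
        outputs = activation(outputs)          # e.g. tf.nn.relu or tf.nn.softmax
    return outputs
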
Next, keras.engine.training.Model.compiled_loss is executed. This calls the LossesContainer object that was set up on keras.engine.training.Model earlier, i.e. keras.engine.compile_utils.LossesContainer.__call__, which in turn invokes the configured keras.losses loss object.

Because the configured loss function name is “sparse_categorical_crossentropy”, execution enters keras.losses.sparse_categorical_crossentropy.

After the loss function is computed, the program executes keras.utils.losses_utils.compute_weighted_loss to apply sample weighting, and then keras.utils.losses_utils.scale_loss_for_distribution to scale the loss; essentially, these steps perform the weight-related calculations.
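
You can reproduce the per-batch loss by hand to see what these steps amount to when no sample weights are given (a sketch; the mean over the batch is exactly what compute_weighted_loss produces in that case):

import tensorflow as tf
from tensorflow.keras import losses

y_pred = model(test_images[:128], training=False)
per_sample = losses.sparse_categorical_crossentropy(test_labels[:128], y_pred)
print(float(tf.reduce_mean(per_sample)))   # ≈ the loss reported for this batch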

After the result is computed, the returned log information is printed out through the callback that handles logging.
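
If you want to watch those per-batch logs yourself, a small custom callback works; on_test_batch_end receives the same logs dictionary that the ProgbarLogger prints (EvalLogPrinter is just an illustrative name):

from tensorflow.keras.callbacks import Callback

class EvalLogPrinter(Callback):
    def on_test_batch_end(self, batch, logs=None):
        print(f"batch {batch}: {logs}")

model.evaluate(test_images, test_labels, callbacks=[EvalLogPrinter()])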
