
Log everything with Layer


Layer allows you to log your project's metadata, such as model metrics and parameters. In this notebook, we'll walk through how you can log various items in your project with Layer. Let's start by installing Layer.

!pip install layer -U

For this illustration, we'll use the Fashion MNIST dataset to build a simple CNN model.

import layer
import tensorflow as tf
import os
import numpy as np
import pandas as pd

Load the Fashion MNIST train and test datasets from Layer:

mnist_train = layer.get_dataset('layer/fashion_mnist/datasets/fashion_mnist_train').to_pandas()
mnist_test = layer.get_dataset('layer/fashion_mnist/datasets/fashion_mnist_test').to_pandas()
mnist_train.head()
mnist_test.head()

Convert the images to NumPy arrays for TensorFlow:

def images_to_np_array(image_column):
    return np.array([np.array(im.getdata()).reshape((im.size[1], im.size[0])) for im in image_column])
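
For instance, the helper can be applied to the image columns loaded above to produce the (N, 28, 28) arrays that TensorFlow expects; the same calls appear inside the training functions later in this notebook:

# Convert the PIL images in each DataFrame column to NumPy arrays
train_images = images_to_np_array(mnist_train.images)
test_images = images_to_np_array(mnist_test.images)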

Adding Layer to your project is as simple as wrapping your functions with Layer decorators. Let's import the ones we'll be using.

from layer.decorators import model, fabric, pip_requirements, resources
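
As a quick preview of the pattern used throughout this notebook, the decorators simply stack on top of an ordinary Python function (a sketch only; the full versions appear below):

@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])  # pin packages for the run
@fabric("f-gpu-small")  # request a small GPU fabric to execute on
@model("mnist")  # register the function's return value as the "mnist" model
def train():
    ...  # build, fit, and return the model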

Next, let's authenticate your Layer account. Click the generated link to log in or sign up, then copy the generated code, paste it into the text box in this notebook, and press Enter.

layer.login()

Initialize a Layer project

Layer stores all your metadata under a project. You can create multiple projects for free, and each project can hold multiple datasets and models. Projects are created using the init function.

layer.init("logging")

Logging with Layer

Some of the items you can log with Layer include:

  • Project description.
  • Markdown.
  • Model parameters.
  • Model training and evaluation metrics.
  • Pandas DataFrame.
  • Matplotlib charts.

Logging in Layer is done inside a function wrapped with the @model or @dataset decorator. The log function is used to log everything in Layer, and it expects a dictionary.
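
At its simplest, each call passes a dictionary mapping names to values. A minimal sketch (the key names here are purely illustrative):

layer.log({"learning_rate": 0.01})  # a scalar parameter
layer.log({"epoch_accuracy": 0.93}, step=3)  # a metric tied to a step
layer.log({"Notes": layer.Markdown("**Looks promising.**")})  # rendered Markdown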

In the snippet below, we use Layer to log the model parameters, a description, validation metrics, and a sample prediction DataFrame.

 layer.log({"Description": "TensorFlow MNIST project"})
markdown = """
# Layer supports Markdown.

You can use it to add **some descriptions** in your model development.

"""
layer.log({"Description":layer.Markdown(markdown)})
parameters = {"shape":28, "activation": "relu", "classes": 10, "units":12, "optimizer":"adam", "epochs":10,"kernel_size":3,"pool_size":2, "dropout":0.5}
layer.log(parameters)

df = pd.DataFrame(predictions, columns=["0","1","2","3","4","5","6","7","8","9"])
# Log Pandas DataFrame
layer.log({"Sample predictions":df.sample(100)})
test_metrics = {"Test loss": test_loss,"Test accuracy":test_acc }
layer.log(test_metrics)
layer.log({"Accuracy plot": plt.gcf()})

@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])
@fabric("f-gpu-small")
@model("mnist")
def train():
    from tensorflow import keras
    from tensorflow.keras import layers
    import matplotlib.pyplot as plt

    train_images = images_to_np_array(mnist_train.images)
    test_images = images_to_np_array(mnist_test.images)
    train_labels = mnist_train.labels
    test_labels = mnist_test.labels

    layer.log({"Description": "TensorFlow MNIST project"})
    markdown = """
# Layer supports Markdown.

You can use it to add **some descriptions** in your model development.
"""
    layer.log({"Description": layer.Markdown(markdown)})

    parameters = {"shape": 28, "activation": "relu", "classes": 10, "units": 12,
                  "optimizer": "adam", "epochs": 10, "kernel_size": 3,
                  "pool_size": 2, "dropout": 0.5}
    layer.log(parameters)

    # Set up the layers
    model = keras.Sequential(
        [
            keras.Input(shape=(parameters["shape"], parameters["shape"], 1)),
            layers.Conv2D(32, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Conv2D(64, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Flatten(),
            layers.Dropout(parameters["dropout"]),
            layers.Dense(parameters["classes"], activation="softmax"),
        ]
    )

    # Compile the model; the last layer already applies softmax,
    # so the loss receives probabilities, not logits
    model.compile(optimizer=parameters["optimizer"],
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=["accuracy"])

    # Train it!
    history = model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=parameters["epochs"], verbose=2)
    metrics_df = pd.DataFrame(history.history)
    layer.log({"Metrics DF": metrics_df})

    # Log Matplotlib charts
    metrics_df[["loss", "val_loss"]].plot()
    layer.log({"Loss plot": plt.gcf()})
    metrics_df[["accuracy", "val_accuracy"]].plot()
    layer.log({"Accuracy plot": plt.gcf()})

    # And finally evaluate the accuracy
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
    predictions = model.predict(test_images)
    df = pd.DataFrame(predictions, columns=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"])
    # Log a Pandas DataFrame
    layer.log({"Sample predictions": df.sample(100)})
    test_metrics = {"Test loss": test_loss, "Test accuracy": test_acc}
    layer.log(test_metrics)
    return model

layer.run([train])

Logging with Layer

Log with steps

You can also use Layer to log data that involves steps. For instance, the snippet below logs some sample MNIST images, one image per step.

mnist_train_sample = mnist_train[["images"]].head(10)
for i in range(10):
    layer.log({"Image": mnist_train_sample["images"][i]}, step=i)

@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])
@fabric("f-gpu-small")
@model("mnist")
def train():
    from tensorflow import keras
    from tensorflow.keras import layers

    train_images = images_to_np_array(mnist_train.images)
    test_images = images_to_np_array(mnist_test.images)
    train_labels = mnist_train.labels
    test_labels = mnist_test.labels

    layer.log({"Description": "TensorFlow MNIST project"})
    markdown = """
# Layer supports Markdown.

You can use it to add **some descriptions** in your model development.

In this run we'll add logging images in steps. Let's log some sample images
from the MNIST dataset.
"""
    layer.log({"Description": layer.Markdown(markdown)})

    mnist_train_sample = mnist_train[["images"]].head(10)
    for i in range(10):
        layer.log({"Image": mnist_train_sample["images"][i]}, step=i)

    parameters = {"shape": 28, "activation": "relu", "classes": 10, "units": 12,
                  "optimizer": "adam", "epochs": 10, "kernel_size": 3,
                  "pool_size": 2, "dropout": 0.5}
    layer.log(parameters)

    # Set up the layers
    model = keras.Sequential(
        [
            keras.Input(shape=(parameters["shape"], parameters["shape"], 1)),
            layers.Conv2D(32, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Conv2D(64, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Flatten(),
            layers.Dropout(parameters["dropout"]),
            layers.Dense(parameters["classes"], activation="softmax"),
        ]
    )

    # Compile the model; the last layer already applies softmax,
    # so the loss receives probabilities, not logits
    model.compile(optimizer=parameters["optimizer"],
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=["accuracy"])

    # Train it!
    model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=parameters["epochs"], verbose=2)

    # And finally evaluate the accuracy
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
    test_metrics = {"Test loss": test_loss, "Test accuracy": test_acc}
    predictions = model.predict(test_images)
    df = pd.DataFrame(predictions, columns=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"])
    # Log a Pandas DataFrame
    layer.log({"Sample predictions": df.sample(100)})
    layer.log(test_metrics)
    return model

layer.run([train])

Notice how you can use the slider to view the image logged at each step.

Log in steps

Log using callbacks

A better use case for logging with steps is tracking the performance of the model per epoch. For instance, you can use a Keras callback to log the training and validation metrics after each epoch; Layer provides a built-in callback for this purpose.
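
In isolation, the callback is passed to model.fit like any other Keras callback. A minimal sketch, assuming a compiled model and the image arrays prepared earlier:

# Layer's built-in callback logs training/validation metrics after every epoch
model.fit(
    x=train_images,
    y=train_labels,
    validation_data=(test_images, test_labels),
    epochs=10,
    verbose=0,
    callbacks=[layer.KerasCallback()],
)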

@fabric("f-gpu-small")
@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])
@model("mnist")
def train():
    from tensorflow import keras
    from tensorflow.keras import layers

    train_images = images_to_np_array(mnist_train.images)
    test_images = images_to_np_array(mnist_test.images)
    train_labels = mnist_train.labels
    test_labels = mnist_test.labels

    layer.log({"Description": "TensorFlow MNIST project"})
    markdown = """
# Layer supports Markdown.

You can use it to add **some descriptions** in your model development.

In this run we'll add logging images in steps. Let's log some sample images
from the MNIST dataset.
"""
    layer.log({"Description": layer.Markdown(markdown)})

    mnist_train_sample = mnist_train[["images"]].head(10)
    for i in range(10):
        layer.log({"Image": mnist_train_sample["images"][i]}, step=i)

    parameters = {"shape": 28, "activation": "relu", "classes": 10, "units": 12,
                  "optimizer": "adam", "epochs": 10, "kernel_size": 3,
                  "pool_size": 2, "dropout": 0.5}
    layer.log(parameters)

    # Set up the layers
    model = keras.Sequential(
        [
            keras.Input(shape=(parameters["shape"], parameters["shape"], 1)),
            layers.Conv2D(32, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Conv2D(64, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Flatten(),
            layers.Dropout(parameters["dropout"]),
            layers.Dense(parameters["classes"], activation="softmax"),
        ]
    )

    # Compile the model; the last layer already applies softmax,
    # so the loss receives probabilities, not logits
    model.compile(optimizer=parameters["optimizer"],
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=["accuracy"])

    # Train it! The built-in callback logs metrics after every epoch
    model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=parameters["epochs"], verbose=0, callbacks=[layer.KerasCallback()])

    # And finally evaluate the accuracy
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
    test_metrics = {"Test loss": test_loss, "Test accuracy": test_acc}
    predictions = model.predict(test_images)
    df = pd.DataFrame(predictions, columns=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"])
    # Log a Pandas DataFrame
    layer.log({"Sample predictions": df.sample(100)})
    layer.log(test_metrics)
    return model

layer.run([train])

Using callbacks

Log interactive apps

You can log interactive Gradio and Streamlit applications with Layer. Applications are logged by passing the Hugging Face Space embed link to the log function as Markdown. The syntax looks like this:

layer.log({"demo":layer.Markdown("<iframe width='100%', height='522px' src='https://hf.space/embed/mecevit/english-to-sql/+'></iframe>")})

@fabric("f-gpu-small")
@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])
@model("mnist")
def train():
    from tensorflow import keras
    from tensorflow.keras import layers

    train_images = images_to_np_array(mnist_train.images)
    test_images = images_to_np_array(mnist_test.images)
    train_labels = mnist_train.labels
    test_labels = mnist_test.labels

    layer.log({"Description": "TensorFlow MNIST project"})
    markdown = """
# Layer supports Markdown.

You can use it to add **some descriptions** in your model development.

In this run we'll add logging images in steps. Let's log some sample images
from the MNIST dataset.
"""
    layer.log({"Description": layer.Markdown(markdown)})

    mnist_train_sample = mnist_train[["images"]].head(10)
    for i in range(10):
        layer.log({"Image": mnist_train_sample["images"][i]}, step=i)

    parameters = {"shape": 28, "activation": "relu", "classes": 10, "units": 12,
                  "optimizer": "adam", "epochs": 10, "kernel_size": 3,
                  "pool_size": 2, "dropout": 0.5}
    layer.log(parameters)

    # Set up the layers
    model = keras.Sequential(
        [
            keras.Input(shape=(parameters["shape"], parameters["shape"], 1)),
            layers.Conv2D(32, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Conv2D(64, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Flatten(),
            layers.Dropout(parameters["dropout"]),
            layers.Dense(parameters["classes"], activation="softmax"),
        ]
    )

    # Compile the model; the last layer already applies softmax,
    # so the loss receives probabilities, not logits
    model.compile(optimizer=parameters["optimizer"],
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=["accuracy"])

    # Train it!
    model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=parameters["epochs"], verbose=0)

    # And finally evaluate the accuracy
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
    test_metrics = {"Test loss": test_loss, "Test accuracy": test_acc}
    predictions = model.predict(test_images)
    df = pd.DataFrame(predictions, columns=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"])
    # Log a Pandas DataFrame
    layer.log({"Sample predictions": df.sample(100)})
    layer.log(test_metrics)
    # Log the embedded Hugging Face Space demo
    layer.log({"demo": layer.Markdown("<iframe width='100%' height='522px' src='https://hf.space/embed/mecevit/english-to-sql/+'></iframe>")})
    return model

layer.run([train])

Gradio Demo

Log videos and GIFs

Layer supports logging videos and GIFs, which can come in handy for showing short demos. In the example below, we use the @resources decorator to upload a video and a GIF to Layer, then log them using layer.log.

layer.log({"GIF": Image.open("video.gif")})
video_path = Path("video.mp4")

@fabric("f-gpu-small")
@pip_requirements(packages=["tensorflow==2.8.0", "keras==2.8.0"])
@resources("video.gif", "video.mp4")
@model("mnist")
def train():
    from tensorflow import keras
    from tensorflow.keras import layers
    from pathlib import Path

    train_images = images_to_np_array(mnist_train.images)
    test_images = images_to_np_array(mnist_test.images)
    train_labels = mnist_train.labels
    test_labels = mnist_test.labels

    layer.log({"Description": "TensorFlow MNIST project"})
    markdown = """
## Metadata about experiments and model training runs
Layer allows you to store metadata about experiments and model training runs. This information includes but is not limited to:

- **Dataset version** used for model training.
- **Model hyperparameters** yielding the best results.
- **Training loss and metrics** to understand if the model is learning.
- **Testing metrics and loss** to quickly see if the model is overfitting.
- **Model predictions** to get a rough idea of its performance.
- **Hardware metrics** to inform you of GPU and CPU utilization.
- **Performance charts** such as accuracy and loss plots, confusion matrix, Precision-Recall Curve, ROC curve, etc.
- **Package versions** to ensure the project runs without failing.
- **Model training logs** to make it easy to debug the project.
- **Information that is specific** to the domain of your problem.

Layer stores information regarding the trained models too. This information includes:

- The person who trained the model.
- The model version.
- The infrastructure used to train the model (local or cloud).
- The machine learning packages used to train the model, for example TensorFlow, PyTorch, Scikit-learn, etc.
- The model description.
- When the model was trained.
- How long it took to train the model.

Layer stores the resulting model and makes it accessible for immediate use. You can fetch the model and use it for predictions right away.
"""
    layer.log({"Description": layer.Markdown(markdown)})

    mnist_train_sample = mnist_train[["images"]].head(10)
    for i in range(10):
        layer.log({"Image": mnist_train_sample["images"][i]}, step=i)

    parameters = {"shape": 28, "activation": "relu", "classes": 10, "units": 12,
                  "optimizer": "adam", "epochs": 10, "kernel_size": 3,
                  "pool_size": 2, "dropout": 0.5}
    layer.log(parameters)

    # Set up the layers
    model = keras.Sequential(
        [
            keras.Input(shape=(parameters["shape"], parameters["shape"], 1)),
            layers.Conv2D(32, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Conv2D(64, kernel_size=(parameters["kernel_size"], parameters["kernel_size"]), activation=parameters["activation"]),
            layers.MaxPooling2D(pool_size=(parameters["pool_size"], parameters["pool_size"])),
            layers.Flatten(),
            layers.Dropout(parameters["dropout"]),
            layers.Dense(parameters["classes"], activation="softmax"),
        ]
    )

    # Compile the model; the last layer already applies softmax,
    # so the loss receives probabilities, not logits
    model.compile(optimizer=parameters["optimizer"],
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=["accuracy"])

    # Train it!
    model.fit(x=train_images, y=train_labels, validation_data=(test_images, test_labels), epochs=parameters["epochs"], verbose=0)

    # And finally evaluate the accuracy
    test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
    test_metrics = {"Test loss": test_loss, "Test accuracy": test_acc}
    predictions = model.predict(test_images)
    df = pd.DataFrame(predictions, columns=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"])
    layer.log({"Sample predictions": df.sample(100)})
    layer.log(test_metrics)

    # Log the GIF and the video uploaded with @resources
    layer.log({"GIF": Path(f"{os.getcwd()}/video.gif")})
    video_path = Path("video.mp4")
    layer.log({"Video": video_path})
    return model

layer.run([train])

Log videos and GIFs

Where to go from here

To learn more about using Layer, you can: