DevOps Articles

Curated articles, resources, tips and trends from the DevOps World.

Tutorial: Deploying TensorFlow Models at the Edge with NVIDIA Jetson Nano and K3s

4 years ago thenewstack.io

Summary: This is a summary of an article originally published by The New Stack.

In this tutorial, we will explore the idea of running TensorFlow models as microservices at the edge. For completeness, we will run a single-node K3s cluster on the Jetson Nano.

Since Docker supports custom runtimes, we can use the standard Docker CLI with the --runtime nvidia switch to select NVIDIA's container runtime. Check which runtimes the Docker daemon has registered with the command below.
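The original command was stripped from this summary; a minimal check, assuming the NVIDIA container runtime that ships with JetPack images is installed, is to list the runtimes known to the Docker daemon:

```sh
# List the runtimes registered with the Docker daemon.
# On a Jetson with NVIDIA's container runtime installed,
# "nvidia" should appear alongside the default "runc".
sudo docker info | grep -i runtime
```

Any container can then be started with `sudo docker run --runtime nvidia …` to opt into GPU access.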

For the AI workloads running in K3s, we need access to the GPU, which is available only through the nvidia-docker runtime.
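One common way to guarantee this (an assumption here, not spelled out in the summary) is to make nvidia the default Docker runtime, so every container that K3s launches through Docker gets GPU access without per-pod flags:

```sh
# Make "nvidia" the default runtime for all containers, including
# the ones K3s will launch. Assumes nvidia-container-runtime is on PATH.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
EOF

# Restart the daemon so the new default takes effect.
sudo systemctl restart docker
```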

We will also add a couple of other switches that make it easy to use the kubectl CLI with K3s, as sketched below.
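The summary doesn't name the switches; a plausible install line, using documented K3s flags (`--docker` to reuse the Docker runtime configured above, plus the kubeconfig switches so kubectl works without root), would be:

```sh
# Install a single-node K3s server that uses Docker instead of the
# bundled containerd, and write a kubeconfig readable by non-root users.
curl -sfL https://get.k3s.io | sh -s - \
    --docker \
    --write-kubeconfig ~/.kube/config \
    --write-kubeconfig-mode 644
```

After the install finishes, `kubectl get nodes` should report the Jetson Nano as Ready.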
