
Deploying Machine Learning Models as Microservices Using Docker

MP4 | Video: AVC 1920x1080 | Audio: AAC 48kHz 2ch | Duration: 24m | 825 MB
Genre: eLearning | Language: English

Modern applications running in the cloud often rely on REST-based microservices architectures built with Docker containers.

Docker enables the components of your application to communicate with one another, and makes those components easy to compose and scale.

Data scientists use these techniques to efficiently scale their machine learning models to production applications.

This video teaches you how to deploy machine learning models behind a REST API—to serve low-latency requests from applications—without using a Spark cluster.

In the process, you'll learn how to export models trained in SparkML; how to work with Docker, a convenient way to build, deploy, and ship application code for microservices; and how a model scoring service can support both single on-demand predictions and bulk predictions.
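To make the "single on-demand vs. bulk predictions" distinction concrete, here is a minimal sketch of a model scoring service using only the Python standard library. The routes (`/predict`, `/predict/bulk`), the request shapes, and the stub `score` function are illustrative assumptions, not the course's actual code; in practice `score` would call a model exported from SparkML (e.g. via MLeap or PMML).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    """Stub model: stands in for a real exported model (assumption)."""
    return sum(features)  # placeholder "prediction"

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if self.path == "/predict":            # single on-demand prediction
            result = {"prediction": score(body["features"])}
        elif self.path == "/predict/bulk":     # many instances in one call
            result = {"predictions": [score(f) for f in body["instances"]]}
        else:
            self.send_error(404)
            return
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

def start_server(port=8080):
    """Run the scoring service in a background thread."""
    server = HTTPServer(("127.0.0.1", port), ScoringHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The bulk route matters for throughput: scoring many rows in one HTTP round trip amortizes request overhead, while the single-prediction route keeps latency low for interactive callers.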

Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; cloud platforms like Amazon Web Services; Bash, Docker, and REST.

Understand how to deploy machine learning models behind a REST API
Learn to utilize Docker containers for REST-based microservices architectures
Explore methods for exporting models trained in SparkML using a library like Combust MLeap
See how Docker builds, deploys, and ships application code for microservices
Discover how to deploy a model using exported PMML with a REST API in a Docker container
Learn to use the AWS Elastic Container Service to deploy a model-hosting server in Docker
Pick up techniques that enable a model-hosting server to read a model
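The container build-and-run workflow behind these objectives might look like the following sketch. The file names (`app.py`, `model/`), image tag, and port are illustrative assumptions, not the course's actual project layout.

```shell
# Write a minimal Dockerfile for the scoring service (hypothetical layout:
# app.py is the REST server, model/ holds the exported model bundle).
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
COPY model/ ./model/
EXPOSE 8080
CMD ["python", "app.py"]
EOF

# Build the image and run it locally, mapping the service port.
docker build -t model-service .
docker run -d -p 8080:8080 model-service
```

Once the image is pushed to a registry, the same image can be deployed on AWS ECS as a task definition and service, so the model server scales like any other containerized microservice.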



Release date: 2017-12-14