Building Transformer-Based Natural Language Processing Applications (BNLPA)

 

Course Overview

Learn how to apply and fine-tune a Transformer-based deep learning model to Natural Language Processing (NLP) tasks.

In this course, you'll:

  • Construct a Transformer neural network in PyTorch
  • Build a named-entity recognition (NER) application with BERT
  • Deploy the NER application with ONNX and TensorRT to a Triton inference server

Upon completion, you’ll be proficient in task-agnostic applications of Transformer-based models.

Please note that once a booking has been confirmed, it is non-refundable: the confirmed seat cannot be cancelled, and no refund will be issued regardless of attendance.

Course Content

Introduction
  • Meet the instructor.
  • Create an account at courses.nvidia.com/join
Introduction to Transformers

Explore how the transformer architecture works in detail:

  • Build the transformer architecture in PyTorch.
  • Calculate the self-attention matrix (see the sketch below).
  • Translate English to German with a pretrained transformer model.
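
To give a flavor of what this module builds, here is a minimal sketch of the scaled dot-product self-attention computation, softmax(QK^T / sqrt(d_k)) V, in PyTorch. Variable names and shapes are illustrative and are not taken from the course materials.

    import math
    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(Q, K, V):
        # d_k: dimensionality of the key vectors, used to scale the scores.
        d_k = Q.size(-1)
        # Raw attention scores: one score per (query position, key position) pair.
        scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
        # Softmax over the key dimension turns scores into attention weights.
        weights = F.softmax(scores, dim=-1)
        # Each output position is a weighted sum of the value vectors.
        return weights @ V

    # Toy self-attention: batch of 1 sequence, 4 tokens, model dimension 8.
    x = torch.randn(1, 4, 8)
    out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
    print(out.shape)  # torch.Size([1, 4, 8])
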
Self-Supervision, BERT, and Beyond

Learn how to apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo:

  • Build a text classification project to classify abstracts.
  • Build an NER project to identify disease names in text (see the sketch below).
  • Improve project accuracy with domain-specific models.
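
The course itself uses NVIDIA NeMo for these projects; purely to illustrate what a token-classification (NER) call looks like, here is a minimal sketch using the Hugging Face transformers library instead. The model checkpoint is an assumption: a general-purpose NER model, not the domain-specific disease model built in class.

    from transformers import pipeline

    # Token-classification pipeline with a generic pretrained NER model.
    # (Checkpoint name is illustrative; a biomedical checkpoint would be
    # a better fit for disease-name extraction.)
    ner = pipeline("token-classification",
                   model="dslim/bert-base-NER",
                   aggregation_strategy="simple")

    for entity in ner("Patients with type 2 diabetes often develop neuropathy."):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
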
Inference and Deployment for NLP

Learn how to deploy an NLP project for live inference on NVIDIA Triton:

  • Prepare the model for deployment.
  • Optimize the model with NVIDIA® TensorRT™.
  • Deploy the model and test it (see the sketch below).
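
To illustrate the final "deploy and test" step, here is a minimal sketch of querying a model on a running Triton server with the tritonclient Python HTTP API. The server URL, model name ("bert_ner"), and tensor names ("input_ids", "logits") are assumptions; the actual values depend on the exported model's Triton configuration.

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a local Triton Inference Server (default HTTP port 8000).
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Dummy tokenized input; shape, dtype, and tensor name must match the
    # model's Triton configuration (these values are hypothetical).
    input_ids = np.zeros((1, 128), dtype=np.int64)
    infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT64")
    infer_input.set_data_from_numpy(input_ids)

    result = client.infer(model_name="bert_ner", inputs=[infer_input])
    logits = result.as_numpy("logits")  # hypothetical output tensor name
    print(logits.shape)

In practice, "preparing" the model typically means exporting it to ONNX (for example with torch.onnx.export) and optionally building a TensorRT engine from that ONNX file before placing it in Triton's model repository.
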
Final Review
  • Review key learnings and answer questions.
  • Complete the assessment and earn a certificate.
  • Take the workshop survey.
  • Learn how to set up your own environment and discuss additional resources and training.

Prerequisites

  • Experience with Python coding and use of library functions and parameters
  • Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras
  • Basic understanding of neural networks

Course Objectives

  • How transformers are used as the basic building blocks of modern LLMs for NLP applications
  • How self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results
  • How to leverage pretrained, modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering
  • How to manage inference challenges and deploy refined models for live applications

Prices & Delivery Methods

Online Training

Duration
1 day

Price
  • US$ 500

Classroom Training

Duration
1 day

Price
  • United States: US$ 500

Schedule

Click on a town name or "Online Training" to book.

Instructor-Led Classroom Training: This is an instructor-led classroom (ILT) course.
Instructor-Led Online Training: This is an instructor-led online (ILO) course. These sessions are conducted via WebEx in a VoIP environment and require an internet connection and a headset with microphone connected to your computer or laptop.
FLEX Training: This is a FLEX course, delivered simultaneously in two modalities. Choose to attend the instructor-led online (ILO) virtual session or the instructor-led classroom (ILT) session.

United States

Online Training 09:00 US/Eastern

Canada

Online Training 09:00 Canada/Eastern