Acceleware offers advanced CUDA training courses for NVIDIA GPUs, delivered by the industry's most experienced instructors. Since 2008, Acceleware has provided detailed instruction to hundreds of programmers who need to achieve maximum performance from compute-intensive GPU workloads. This is the industry-leading course on GPU programming!
Attendees learn our top-rated techniques for parallel programming with CUDA, along with related technologies such as OpenCL, MPI, Microsoft HPC Server, and Visual Studio. Acceleware's training consists of classroom lectures and several practical hands-on exercises using supplied laptops equipped with NVIDIA GPUs.
We recommend that attendees have a background in C/C++ (2 or more years) in order to get the most out of the course. Contact firstname.lastname@example.org if you are interested in a beginner-level CUDA course.
Attendees should be familiar with
Entirely optional (but helpful) experiences:
4-Day Course Syllabus
- Overview of GPU computing
- Data-parallel architectures and the GPU programming model
- GPU memory model & thread cooperation
- Hands-on exercises: GPU memory management, simple CUDA kernels, shared memory, and constant memory
- Asynchronous operations
- Advanced CUDA features
- Debugging GPU Programs
- Hands-on exercises: Asynchronous operations, advanced CUDA features, experience with CUFFT, CUBLAS, and Thrust, and debugging
- Introduction to optimization
- Resource management, latency and occupancy
- Memory performance optimizations
- Hands-on exercises: Arithmetic optimizations, occupancy calculator and memory access patterns
- Profiling applications
- Hands-on exercises: Case study exercise and OpenACC
- Case study: Finite difference stencil algorithm or Monte Carlo simulations
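To give a flavour of the hands-on exercises, here is a minimal sketch of the kind of program covered early in the syllabus: GPU memory management plus a simple CUDA kernel. This example is illustrative only and is not taken from the course materials; the kernel name and sizes are our own choices.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Simple CUDA kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back to the host and spot-check it.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

The course exercises build on patterns like this one, adding shared memory, constant memory, asynchronous transfers, and profiling-driven optimization.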
Contact us for pricing information and to schedule your training session.
Your fee includes:
- Use of a laptop equipped with CUDA capable GPU
- Choice of Windows or Linux OS
- Printed manual of all lectures
- Electronic copy of lab exercises
- Certificate of completion
- Beverages and snacks
- 90 days of post-training support (conditions apply)