Blogs

Webinar Recording: An Introduction to OpenCL using AMD GPUs

Join Chris Mason, Product Manager at Acceleware, for an informative introduction to GPU Programming. The tutorial begins with a brief overview of OpenCL and data-parallelism before focusing on the GPU programming model. We also explore the fundamentals of GPU kernels, host and device responsibilities, OpenCL syntax and work-item hierarchy.

Webinar Recording: Asynchronous Operations & Dynamic Parallelism in CUDA

Join Chris Mason, Product Manager at Acceleware, as he leads attendees in a deep dive into asynchronous operations and how to maximize throughput on both the CPU and GPU with streams. Chris demonstrates how to build a CPU/GPU pipeline and how to design your algorithm to take advantage of asynchronous operations. The second part of the webinar focuses on dynamic parallelism.
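The CPU/GPU pipeline idea described above can be sketched with CUDA streams. The following is a minimal, hypothetical example (the `scale` kernel, chunk count, and sizes are illustrative, not from the webinar): work is split into chunks, and each chunk's copy-in, kernel, and copy-out are issued asynchronously on its own stream so that transfers and computation from different chunks can overlap.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int nChunks = 4;
    const int chunkSize = 1 << 20;
    float *h, *d;
    // Pinned (page-locked) host memory is required for copies to overlap with kernels.
    cudaMallocHost(&h, (size_t)nChunks * chunkSize * sizeof(float));
    cudaMalloc(&d, (size_t)nChunks * chunkSize * sizeof(float));

    cudaStream_t streams[nChunks];
    for (int c = 0; c < nChunks; ++c) cudaStreamCreate(&streams[c]);

    for (int c = 0; c < nChunks; ++c) {
        size_t off = (size_t)c * chunkSize;
        // Copy in, compute, copy out per chunk; operations in different streams may overlap.
        cudaMemcpyAsync(d + off, h + off, chunkSize * sizeof(float),
                        cudaMemcpyHostToDevice, streams[c]);
        scale<<<(chunkSize + 255) / 256, 256, 0, streams[c]>>>(d + off, chunkSize);
        cudaMemcpyAsync(h + off, d + off, chunkSize * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < nChunks; ++c) cudaStreamDestroy(streams[c]);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}
```

The key design point is that asynchronous copies only overlap with kernels when the host buffers are pinned and the work is issued in separate non-default streams.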

Technical Paper: Modeling of Electromagnetic Assisted Oil Recovery

Presented at ICEAA - IEEE APWC 2014, this paper features an algorithm for the rigorous analysis of electromagnetic (EM) heating of heavy oil reservoirs. The algorithm combines an FDTD-based EM solver with a reservoir simulator. The paper addresses some of the challenges of integrating advanced electromagnetic codes with reservoir simulators and the numerical technology that needs to be developed, particularly as it relates to the multi-physics coupling, translation of meshes, and petro-physical and EM material parameters. The challenges of calculating the electromagnetic dissipation in the vicinity of the antennas are also discussed. Example scenarios are presented and discussed.

Webinar Recording: GPU Architecture & the CUDA Memory Model

Join Chris Mason, Product Manager at Acceleware, and explore the memory model of the GPU! The webinar will begin with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. Chris will define shared, constant and global memory and discuss the best locations to store your application data for optimized performance. Features available in the Kepler architecture such as shared memory configurations and Read-Only Data Cache are introduced and optimization techniques discussed.

State of GPU Virtualization for CUDA Applications 2014

Introduction

Widespread corporate adoption of virtualization technologies has led some users to rely on Virtual Machines (VMs). When these users or IT administrators wish to start using CUDA, often the first thought is to spin up a new VM. Success is not guaranteed, as not all virtualization technologies support CUDA. A survey of GPU virtualization technologies for running CUDA applications is presented. To support CUDA, a virtualization technology must present a supported CUDA device to the VM’s operating system and allow the NVIDIA graphics driver to be installed.

GPU Virtualization Terms 

  • Device Pass-Through: This is the simplest virtualization model, where the entire GPU is presented to the VM as if directly connected. The GPU is usable by only one VM. The CPU equivalent is assigning a single core for exclusive use by a VM. VMware calls this mode virtual Direct Graphics Accelerator (vDGA).
  • Partitioning: A GPU is split into virtual GPUs that are used independently by a VM. 
  • Timesharing: Timesharing involves sharing the GPU, or a portion of it, among multiple VMs. Also known as oversubscription or multiplexing, timesharing of CPUs is a mature technology, while GPU timesharing is only now being introduced.
  • Live Migration: The ability to move a running VM from one VM host to another without downtime.

Virtualization Support for CUDA 

CUDA support from five virtualization technology vendors accounting for most of the virtualization market was examined. The five major vendors are VMware, Microsoft, Oracle, Citrix and Red Hat. A summary is shown in the table below.

New whitepaper: OpenCL on FPGAs for GPU Programmers

In 2012 Altera announced their commitment to developing an SDK that would enable developers to program Altera field-programmable gate arrays (FPGAs) with the Open Computing Language (OpenCL). This whitepaper introduces developers who have previous experience with general-purpose computing on graphics processing units (GPUs) to parallel programming targeting Altera FPGAs via the OpenCL framework.

This paper provides a brief overview of OpenCL, highlights some of the underlying technology and benefits of Altera FPGAs, and then focuses on how OpenCL kernels are executed on Altera FPGAs compared with GPUs. The paper also presents the key differences in optimization techniques when targeting FPGAs.

Click here to download the whitepaper.


Webinar: An Introduction to CUDA Programming

NVIDIA GTC Express webinar recording from May 28, 2014.

Join Chris Mason, Product Manager at Acceleware, and explore the memory model of the GPU! The webinar begins with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. Chris defines shared, constant and global memory and discusses the best locations to store your application data for optimized performance. Features available in the Kepler architecture such as shared memory configurations and the Read-Only Data Cache are introduced and optimization techniques discussed.

GTC 2014 Tutorial Recordings are Online

Great news! NVIDIA has posted the recordings of our CUDA programming and optimization tutorials presented at GTC 2014. As a refresher, here are the topics of the tutorials we presented this year:

Part 1: An Introduction to CUDA Programming (Session S4699)
Taught by Chris Mason
Join us for an informative introduction to CUDA programming. The tutorial will begin with a brief overview of CUDA and data-parallelism before focusing on the GPU programming model. We will explore the fundamentals of GPU kernels, host and device responsibilities, CUDA syntax and thread hierarchy. A programming demonstration of a simple CUDA kernel will be provided.

Part 2: GPU Architecture & the CUDA Memory Model (Session S4700)
Taught by Chris Mason
Explore the memory model of the GPU! The session will begin with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. We will define shared, constant and global memory and discuss the best locations to store your application data for optimized performance. Features available in the Kepler architecture such as the shuffle instruction, shared memory configurations and Read-Only Data Cache are introduced and optimization techniques discussed. A programming demonstration of shared and constant memory will be delivered.

NVIDIA CUDA 6.0 Unified Memory Performance

One of the new features introduced by NVIDIA in CUDA 6.0 is Unified Memory, which simplifies GPU code while also maximizing data access speed by transparently managing memory between the CPU and GPU. Based on the examples provided by NVIDIA in the CUDA C Programming Guide, it’s easy to see how Unified Memory simplifies code; however, the actual performance is a bit of a mystery. Time to do some investigation!

We begin the performance test with the sample code provided in NVIDIA’s CUDA C Programming Guide (Section J.1.1):

 

#include <stdio.h>
#include <cuda_runtime.h>

__global__ void AplusB(int *ret, int a, int b) {
   ret[threadIdx.x] = a + b + threadIdx.x;
}

int main() {
   int *ret;
   cudaMalloc(&ret, 1000 * sizeof(int));

   // Launch 1000 threads in a single block
   AplusB<<< 1, 1000 >>>(ret, 10, 100);

   // Explicitly copy the results back to host memory
   int *host_ret = (int *)malloc(1000 * sizeof(int));
   cudaMemcpy(host_ret, ret, 1000 * sizeof(int), cudaMemcpyDefault);

   for (int i = 0; i < 1000; i++)
      printf("%d: A+B = %d\n", i, host_ret[i]);

   free(host_ret);
   cudaFree(ret);
   return 0;
}

Listing 1: Original code with explicit memory transfers

 

#include <stdio.h>
#include <cuda_runtime.h>

__global__ void AplusB(int *ret, int a, int b) {
   ret[threadIdx.x] = a + b + threadIdx.x;
}

int main() {
   int *ret;
   // A single managed allocation is accessible from both host and device
   cudaMallocManaged(&ret, 1000 * sizeof(int));

   AplusB<<< 1, 1000 >>>(ret, 10, 100);
   // Synchronize before the host touches managed memory
   cudaDeviceSynchronize();

   for (int i = 0; i < 1000; i++)
      printf("%d: A+B = %d\n", i, ret[i]);

   cudaFree(ret);
   return 0;
}

Listing 2: Unified Memory version
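One way to compare the two listings is to time them with CUDA events. The sketch below shows the event API usage only; the measured code is whichever listing's allocation, kernel launch, and read-back you drop in at the marked spot.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    // ... run Listing 1 or Listing 2 (allocation, kernel, read-back) here ...
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);   // wait until all work before 'stop' has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
    printf("elapsed: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

Events are recorded into the GPU's command queue, so this measures device-side time rather than host wall-clock time, which makes the comparison less sensitive to CPU-side noise.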

AMD FirePro W9100 and OpenCL Updates

AMD FirePro W9100

Last week AMD announced the FirePro W9100. This new professional workstation GPU compares favorably to NVIDIA's Tesla K40:

                                        AMD FirePro   NVIDIA Tesla K40          NVIDIA Tesla K40
                                        W9100         (745MHz default clock)    (845MHz clock with GPUBoost)
  Memory (GB)                           16            12                        12
  Peak Single Precision
  Throughput (TFLOPS)                   5.24          4.29                      5.04
  Peak Double Precision
  Throughput (TFLOPS)                   2.62          1.43                      1.68
