Submitted by Dan Cyca on Mon, 2015-11-16 14:08
Tesla M40 and Tesla M60 - A New Epoch for GPU Computing
One scientific epoch ended and another began with James Clerk Maxwell
Submitted by axewebadmin on Mon, 2015-11-02 11:40
Exhibiting at SEG 2015 in New Orleans
This past month Acceleware attended the Society of Exploration Geophysicists’ 85th Annual Meeting and International Exposition in New Orleans, Louisiana. This show brought in over 8,000 professionals from 70 different countries, and Acceleware presented multiple in-booth talks about our advancements in seismic imaging software and high performance computing for the oil and gas industry.
Submitted by axewebadmin on Wed, 2015-06-24 16:45
Exhibiting at EAGE 2015 in Madrid
The annual European Association of Geoscientists & Engineers conference and exhibition took place in beautiful Madrid, Spain from June 1-4. In its 77th year, the show drew over 6,500 delegates, and Acceleware was there to take it all in.
This year, Acceleware delivered dynamic in-booth talks on our latest developments, focusing on three of our products: AxFWI, AxWave, and AxRTM.
Submitted by axewebadmin on Wed, 2015-04-15 16:21
AxFWI is a revolutionary modular FWI platform that enables users to accelerate their research by integrating their own algorithms and code into a highly optimized RTM engine. The easy-to-use interface gives users the control and flexibility required to run many different scenarios while still benefiting from a platform engineered for maximum performance.
Submitted by Dan Cyca on Mon, 2015-04-06 10:00
CUDA developers generally strive for coalesced global memory accesses and/or explicit ‘caching’ of global data in shared memory. However, sometimes algorithms have memory access patterns that cannot be coalesced, and that are not a good fit for shared memory. Fermi GPUs have an automatic L1 cache on each streaming multiprocessor (SM) that can be beneficial for these problematic global memory access patterns. First-generation Kepler GPUs have an automatic L1 cache on each SM, but it only caches local memory accesses. In these GPUs, the lack of automatic L1 cache for global memory is partially offset by the introduction of a separate 48 KB read-only (née texture) cache per SM.
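As a sketch of that last path: on sm_35 and newer devices, global loads can be routed through the read-only cache either by qualifying pointers with const __restrict__ or by using the __ldg() intrinsic explicitly. The kernel name and scaling operation below are invented for illustration:

```cuda
// Minimal sketch: routing global loads through the 48 KB read-only
// cache on Kepler-class (sm_35+) GPUs. The const __restrict__
// qualifiers let the compiler use the read-only path; __ldg() forces it.
__global__ void scale(float *out, const float *__restrict__ in,
                      float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = factor * __ldg(&in[i]);  // explicit read-only-cache load
}
```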
Opt-In L1 Caching on Kepler GPUs
NVIDIA quietly re-enabled L1 caching of global memory on GPUs based on the GK110B, GK20A, and GK210 chips. The Tesla K40 (GK110B), Tesla K80 (GK210), and Tegra K1 (GK20A) all support this feature. You can programmatically query whether a GPU supports caching global memory operations by calling cudaGetDeviceProperties and examining the globalL1CacheSupported property. Examining the Compute Capability alone is not sufficient: the Tesla K20/K20x and the Tesla K40 all report Compute Capability 3.5, but only the K40 supports caching global memory in L1.
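A minimal host-side check might look like the following (a sketch, assuming device 0 is the GPU of interest):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0
    printf("%s: global L1 caching %s\n", prop.name,
           prop.globalL1CacheSupported ? "supported" : "not supported");
    return 0;
}
```

Note that even on supported chips, global loads are not cached in L1 by default; per NVIDIA's documentation, you opt in at compile time with nvcc -Xptxas -dlcm=ca.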
Submitted by Chris Mason on Mon, 2014-10-27 07:52
Join Chris Mason, Product Manager at Acceleware, and learn how to optimize your algorithms for NVIDIA GPUs. This informative webinar provides an overview of the performance analysis tools available in CUDA 6.0 and key optimization strategies for compute-, latency-, and memory-bound problems. The webinar includes techniques for ensuring peak utilization of CUDA cores by choosing the optimal block size. For compute-bound algorithms, Chris discusses branching efficiency, intrinsic functions, and loop unrolling. For memory-bound algorithms, optimal access patterns for global and shared memory are presented, including a comparison between the Fermi and Kepler architectures.
Submitted by Chris Mason on Wed, 2014-09-24 10:34
Join Chris Mason, Product Manager at Acceleware, for an informative introduction to GPU Programming. The tutorial begins with a brief overview of OpenCL and data-parallelism before focusing on the GPU programming model. We also explore the fundamentals of GPU kernels, host and device responsibilities, OpenCL syntax and work-item hierarchy.
Submitted by Chris Mason on Tue, 2014-09-09 10:38
Join Chris Mason, Product Manager at Acceleware, as he leads attendees in a deep dive into asynchronous operations and how to maximize throughput on both the CPU and GPU with streams. Chris demonstrates how to build a CPU/GPU pipeline and how to design your algorithm to take advantage of asynchronous operations. The second part of the webinar focuses on dynamic parallelism.
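A copy/compute pipeline of the kind the webinar covers might be sketched as follows (all names here are invented; the host buffers must be pinned with cudaMallocHost for the copies to actually overlap):

```cuda
#include <cuda_runtime.h>

__global__ void process(float *d, int n) { /* ... work on one chunk ... */ }

// Overlap transfers and compute by round-robining chunks over two streams.
// Assumes n is a multiple of chunks, and h_in/h_out are pinned host buffers.
void pipeline(float *h_in, float *h_out, int n, int chunks)
{
    cudaStream_t s[2];
    float *d_buf[2];
    int chunk = n / chunks;
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&s[i]);
        cudaMalloc(&d_buf[i], chunk * sizeof(float));
    }

    for (int c = 0; c < chunks; ++c) {
        cudaStream_t st = s[c % 2];
        float *d = d_buf[c % 2];
        // Copy-in, compute, copy-out serialize within a stream, but are
        // free to overlap with the other stream's work on the other chunk.
        cudaMemcpyAsync(d, h_in + c * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, st);
        process<<<(chunk + 255) / 256, 256, 0, st>>>(d, chunk);
        cudaMemcpyAsync(h_out + c * chunk, d, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, st);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) {
        cudaStreamDestroy(s[i]);
        cudaFree(d_buf[i]);
    }
}
```

Reusing each device buffer every second chunk is safe here because work queued in the same stream executes in order.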
Submitted by axewebadmin on Fri, 2014-08-15 12:09
Presented at ICEAA - IEEE APWC 2014, this paper features an algorithm for the rigorous analysis of electromagnetic (EM) heating of heavy oil reservoirs. The algorithm couples an FDTD-based EM solver with a reservoir simulator. The paper addresses some of the challenges of integrating advanced electromagnetic codes with reservoir simulators and the numerical technology that must be developed to do so, particularly as it relates to the multi-physics coupling, translation of meshes, and petro-physical and EM material parameters. The challenges of calculating the electromagnetic dissipation in the vicinity of the antennas are also discussed. Example scenarios are presented and discussed.
Submitted by Chris Mason on Thu, 2014-07-24 13:46
Join Chris Mason, Product Manager at Acceleware, and explore the memory model of the GPU! The webinar will begin with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. Chris will define shared, constant and global memory and discuss the best locations to store your application data for optimized performance. Features available in the Kepler architecture such as shared memory configurations and Read-Only Data Cache are introduced and optimization techniques discussed.
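As a small sketch of two of those Kepler-era knobs (the kernel, sizes, and function names below are invented for illustration):

```cuda
#include <cuda_runtime.h>

// Reverse each 256-element block in place via shared memory.
// Assumes n is a multiple of the 256-thread block size.
__global__ void reverseBlocks(float *d, int n)
{
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = d[i];
    __syncthreads();  // all writes to tile must finish before any reads
    if (i < n) d[i] = tile[blockDim.x - 1 - threadIdx.x];
}

void configure()
{
    // Kepler-specific configuration: 8-byte shared memory banks (useful
    // for double precision) and a 48 KB shared / 16 KB L1 split.
    cudaDeviceSetSharedMemConfig(cudaSharedMemBankSizeEightByte);
    cudaFuncSetCacheConfig(reverseBlocks, cudaFuncCachePreferShared);
}
```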