
High Fidelity, High Performance Computing Visualization

 Jim Jeffers
7 July 2015
Description:

Trend 1: Increasing Data Size
• Problem: we can now measure and model increasingly complex phenomena, and big data is outgrowing peripheral memory.
• Performance implications: it is less practical to move the data over a WAN; moving proprietary data out of the data center raises security issues; interactivity is key for scene understanding but harder to achieve.
• Image quality implications: increased spatial and/or temporal resolution means there is more to look at, and it can be more challenging to interpret visually.

Trend 2: Increasing Shading Complexity
• Improved illumination effects (shadows) are needed to aid scene understanding.
• Volumetric effects are also increasingly common (e.g., in oil and gas applications). These are difficult to compute efficiently in OpenGL* but intrinsic to ray tracing / ray casting (see the ray-marching sketch below).

Our Focus
• High performance, high fidelity rendering in software: improve performance for existing apps and enable new apps with higher fidelity and performance.
• Enable efficient usage of compute cluster resources: general purpose rendering on compute nodes, lower cost, reduced I/O, flexible resource allocation, and in-situ / interactive rendering.
• Enable scalable performance from workstation to cluster.
• Large data, large data, large data.

Rendering is a Highly Parallel Workload
• Rendering exhibits substantial task and data parallelism.
• Pixels can be rendered independently of one another.
• Tiles of pixels are commonly rendered on different hardware threads (see the tile-parallel sketch below).
• Vectorization can be exploited within a pixel or across pixels.

Rendering on Intel® Architectures
• Performance (example: ray tracing): ray tracing often operates on hierarchical data structures, and traversal requires fine-grained, data-dependent branching, so high utilization is challenging on very wide vector architectures (see the traversal sketch below). This favors many cores, strong single-threaded performance, and large register files.
• Memory: large data rendering often requires large memory. The Intel® Xeon Phi™ coprocessor scales to 16GB, Intel® Xeon® processors to 256GB+, and shared memory systems to multiple TB.

Summary: High Fidelity Visualization in Software
• Scalable: from workstations to clusters; supports next generation data sizes.
• Flexible: cluster nodes can be used for compute or visualization on demand.
• Cost effective: dedicated GPU coprocessors may be unnecessary.
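To make the volumetric point concrete, here is a minimal, self-contained sketch of front-to-back emission-absorption ray marching, the core loop of volume ray casting. Everything in it (the sampleField() stand-in for a real dataset, the step size dt, the early-out threshold) is an illustrative assumption, not code from the talk.

#include <algorithm>
#include <cstdio>

// Illustrative scalar field standing in for a real volume dataset:
// a soft sphere of density centered at the origin.
float sampleField(float x, float y, float z) {
    return std::max(0.0f, 1.0f - (x*x + y*y + z*z));
}

// March one ray from t0 to t1, compositing emission and absorption
// front to back; terminate early once the ray is nearly opaque.
float marchRay(const float org[3], const float dir[3],
               float t0, float t1, float dt) {
    float color = 0.0f, alpha = 0.0f;
    for (float t = t0; t < t1 && alpha < 0.99f; t += dt) {
        float s = sampleField(org[0] + t*dir[0],
                              org[1] + t*dir[1],
                              org[2] + t*dir[2]);
        float a = std::min(1.0f, s * dt);  // opacity of this short segment
        color += (1.0f - alpha) * s * a;   // emission weighted by visibility
        alpha += (1.0f - alpha) * a;       // accumulate opacity
    }
    return color;
}

int main() {
    const float org[3] = {0.0f, 0.0f, -2.0f};
    const float dir[3] = {0.0f, 0.0f, 1.0f}; // ray straight through the sphere
    std::printf("integrated brightness: %f\n",
                marchRay(org, dir, 0.0f, 4.0f, 0.01f));
}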
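The tile decomposition described above can be sketched in a few lines of C++. This is a hypothetical illustration, assuming a placeholder shade() function and an atomic counter as the tile work queue; a production renderer would add load balancing and vectorized shading on top of the same structure.

#include <algorithm>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int W = 1920, H = 1080, TILE = 64;  // assumed framebuffer/tile sizes

uint32_t shade(int x, int y) {                // placeholder per-pixel work
    return uint32_t(x) ^ uint32_t(y);
}

void renderTile(std::vector<uint32_t>& fb, int x0, int y0) {
    for (int y = y0; y < std::min(y0 + TILE, H); ++y)
        for (int x = x0; x < std::min(x0 + TILE, W); ++x)
            fb[size_t(y) * W + x] = shade(x, y); // pixels are independent
}

int main() {
    std::vector<uint32_t> fb(size_t(W) * H);
    const int tx = (W + TILE - 1) / TILE, ty = (H + TILE - 1) / TILE;
    std::atomic<int> next{0};                 // shared tile work queue
    auto worker = [&] {
        for (int t; (t = next.fetch_add(1)) < tx * ty; )
            renderTile(fb, (t % tx) * TILE, (t / tx) * TILE);
    };
    std::vector<std::thread> pool;            // one worker per hardware thread
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
}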
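The data-dependent branching claim is easiest to see in code. Below is a sketch with an assumed two-child BVH node layout and a standard iterative, stack-based traversal; at the marked branches, different rays take different paths, which is what makes high SIMD-lane utilization hard on very wide vector units.

#include <algorithm>
#include <vector>

struct AABB { float lo[3], hi[3]; };
struct Ray  { float org[3], dir[3]; float tmax; };

// Assumed node layout: inner nodes reference two children,
// leaves reference a primitive range.
struct Node {
    AABB box;
    int  left = -1, right = -1;         // child indices (inner node)
    int  firstPrim = 0, primCount = 0;  // primitive range (leaf)
    bool leaf = false;
};

// Standard slab test: does the ray enter the box before tmax?
bool hitBox(const AABB& b, const Ray& r) {
    float t0 = 0.0f, t1 = r.tmax;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float tn = (b.lo[a] - r.org[a]) * inv;
        float tf = (b.hi[a] - r.org[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

void traverse(const std::vector<Node>& nodes, const Ray& r) {
    int stack[64]; int sp = 0;
    stack[sp++] = 0;                         // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hitBox(n.box, r)) continue;     // <-- data-dependent branch:
        if (n.leaf) {                        // different rays take different
            // intersect primitives [firstPrim, firstPrim + primCount)
        } else {                             // paths here, which starves
            stack[sp++] = n.left;            // very wide vector units
            stack[sp++] = n.right;
        }
    }
}

int main() {
    std::vector<Node> nodes(1);              // a single leaf covering a unit box
    nodes[0].box = {{0, 0, 0}, {1, 1, 1}};
    nodes[0].leaf = true;
    Ray r{{0.5f, 0.5f, -1.0f}, {0.0f, 0.0f, 1.0f}, 1e30f};
    traverse(nodes, r);
}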
Domain: Electronics
Category: Displays
Contents:
High Fidelity, High Performance Computing
Visualization Enabled on Intel® Architecture
Jim Jeffers
Principal Engineer, Manager, Parallel Visualization
Engineering, Intel Corporation
BIGS003

Agenda
• Rendering Markets and Trends

• Intel’s High Fidelity Visualization Solutions Overview
• Professional Rendering Solution – Embree [Available Today] (see the API sketch after this list)

• Technical Co ...
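Since Embree is the professional rendering solution named in the agenda, here is a minimal "single triangle, single ray" sketch against the Embree 3 C API. One hedge: this talk is from 2015, when the shipping API was Embree 2.x, which differed; the sketch reflects the later API, and the geometry and ray values are arbitrary illustrations.

#include <embree3/rtcore.h>
#include <cstdio>

int main() {
    RTCDevice device = rtcNewDevice(NULL);
    RTCScene  scene  = rtcNewScene(device);

    // One triangle, uploaded through Embree-managed buffers.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3,
        3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3,
        3 * sizeof(unsigned), 1);
    v[0]=0; v[1]=0; v[2]=0;  v[3]=1; v[4]=0; v[5]=0;  v[6]=0; v[7]=1; v[8]=0;
    idx[0]=0; idx[1]=1; idx[2]=2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);              // builds the acceleration structure

    // Shoot one ray at the triangle.
    RTCRayHit rh;
    rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.0f;
    rh.ray.dir_x = 0.0f; rh.ray.dir_y = 0.0f; rh.ray.dir_z = 1.0f;
    rh.ray.tnear = 0.0f; rh.ray.tfar = 1e30f;
    rh.ray.mask = 0xFFFFFFFFu; rh.ray.flags = 0;
    rh.ray.time = 0.0f; rh.ray.id = 0;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    rtcIntersect1(scene, &ctx, &rh);    // traversal + intersection

    if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        std::printf("hit, t = %f\n", rh.ray.tfar);
    else
        std::printf("miss\n");

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
}

With Embree installed, this builds with something like: g++ hello_embree.cpp -lembree3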
