
Commit 8a70929

Add new talk from Jeff Hammond
1 parent db37a99

16 files changed: +1389 −23 lines

_data/talks.json

+6-2
@@ -171,7 +171,7 @@
     "abstract_zh": "Android神经​​网络API(NNAPI)是一个Android C API,旨在在Android设备上运行用于计算机学习的计算密集型操作。 NNAPI旨在为构建和训练神经网络的高级机器学习框架(例如TensorFlow Lite和Caffe2)提供功能的基础层。该API在运行Android 8.1(API级别27)或更高版本的所有Android设备上均可用。基于应用程序的需求和Android设备上的硬件功能,NNAPI可以在可用的设备上处理器(包括专用的神经网络硬件(NPU和TPU),图形处理单元(GPU)和数字信号处理器)上有效地分配计算工作负载(DSP)。",
     "duration": "07:50"
   },
-  "jh_parallel-programming": {
+  "jh_parallel": {
     "title": "Heterogeneous parallel programming with open standards using oneAPI and Data Parallel C++",
     "title_zh": "使用oneAPI和Data Parallel C ++以开放标准进行异构并行编程",
     "author": "Jeff Hammond",
@@ -180,7 +180,11 @@
     "bio": "Jeff Hammond is a Principal Engineer at Intel where he works on a wide range of high-performance computing topics, including parallel programming models, system architecture and open-source software. He has published more than 60 journal and conference papers on parallel computing, computational chemistry, and linear algebra software. Jeff received his PhD in Physical Chemistry from the University of Chicago.",
     "bio_zh": "Jeff Hammond是英特尔公司的首席工程师,在那里他研究广泛的高性能计算主题,包括并行编程模型、系统架构和开源软件。他在并行计算、计算化学和线性代数软件方面发表了60多篇期刊和会议论文。并在芝加哥大学获得物理化学博士学位。",
     "abstract": "Diversity in computer architecture and the unceasing demand for application performance in data-intensive workloads are never-ending challenges for programmers. This talk will describe Intel’s oneAPI initiative, which is an open ecosystem for heterogeneous computing that supports high-performance data analytics, machine learning and other workloads. A key component of this is Data Parallel C++, which is based on C++17 and Khronos SYCL and supports direct programming of CPU, GPU and FPGA platforms. We will describe how oneAPI and Data Parallel C++ can be used to build high-performance applications for a range of devices.",
-    "abstract_zh": "计算机体系结构的多样性和数据密集型工作负载中对应用程序性能的不断需求,对程序员来说是永无止境的挑战。这次演讲将描述英特尔的oneAPI计划,这是一个开放的、支持高性能数据分析、机器学习和其他工作负载的异构计算生态系统。其中一个关键组件是数据并行c++,它基于c++ 17和Khronos SYCL,支持CPU、GPU和FPGA平台的直接编程。我们将描述如何使用oneAPI和Data Parallel c++为一系列设备构建高性能应用程序。"
+    "abstract_zh": "计算机体系结构的多样性和数据密集型工作负载中对应用程序性能的不断需求,对程序员来说是永无止境的挑战。这次演讲将描述英特尔的oneAPI计划,这是一个开放的、支持高性能数据分析、机器学习和其他工作负载的异构计算生态系统。其中一个关键组件是数据并行c++,它基于c++ 17和Khronos SYCL,支持CPU、GPU和FPGA平台的直接编程。我们将描述如何使用oneAPI和Data Parallel c++为一系列设备构建高性能应用程序。",
+    "duration": "12:52",
+    "video": "https://app.streamfizz.live/embed/ckeijprvxa4pq07313u8gw5kd",
+    "thumbnail": "https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckeijprvxa4pq07313u8gw5kd/thumbs/thumb-001.jpeg",
+    "added": "2020-09-01"
   },
   "yh-xq-dnn": {
     "title": "Enabling Distributed DNNs for the Mobile Web Over Cloud, Edge and End Devices",

_includes/related-issues/jh_parallel.html

Whitespace-only changes.
@@ -1,2 +1 @@
-<div class=related><p>Related conversations on <a href='https://github.com/w3c/machine-learning-workshop/issues'>GitHub</a>:</p><ul><li><a href='https://github.com/w3c/machine-learning-workshop/issues/85'>#85 Packing operations for gemm</a></li>
-<li><a href='https://github.com/w3c/machine-learning-workshop/issues/90'>#90 Designing privacy-preserving ML APIs</a></li></ul></div>
+<div class=related><p>Related conversations on <a href='https://github.com/w3c/machine-learning-workshop/issues'>GitHub</a>:</p><ul><li><a href='https://github.com/w3c/machine-learning-workshop/issues/90'>#90 Designing privacy-preserving ML APIs</a></li></ul></div>
+2-1
@@ -1 +1,2 @@
-<div class=related><p>Related conversations on <a href='https://github.com/w3c/machine-learning-workshop/issues'>GitHub</a>:</p><ul><li><a href='https://github.com/w3c/machine-learning-workshop/issues/68'>#68 Progressive Enhancement / Graceful degradation</a></li></ul></div>
+<div class=related><p>Related conversations on <a href='https://github.com/w3c/machine-learning-workshop/issues'>GitHub</a>:</p><ul><li><a href='https://github.com/w3c/machine-learning-workshop/issues/68'>#68 Progressive Enhancement / Graceful degradation</a></li>
+<li><a href='https://github.com/w3c/machine-learning-workshop/issues/90'>#90 Designing privacy-preserving ML APIs</a></li></ul></div>
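Both related-issues snippets follow the same single-line markup, so they could plausibly come from a generator like the sketch below. `related_issues_html` is a hypothetical reconstruction inferred from the diff, not code from the repository:

```python
def related_issues_html(issues):
    """Render the related-issues snippet used by the _includes templates,
    given (issue_number, title) pairs. Mirrors the markup visible in the
    diff above; hypothetical reconstruction, not repository code."""
    base = "https://github.com/w3c/machine-learning-workshop/issues"
    items = "".join(
        f"<li><a href='{base}/{num}'>#{num} {title}</a></li>"
        for num, title in issues
    )
    return (
        f"<div class=related><p>Related conversations on "
        f"<a href='{base}'>GitHub</a>:</p><ul>{items}</ul></div>"
    )
```

Regenerating the snippets this way would explain why the commit can drop issue #85 from one file and append issue #90 to another in the same pass.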

_includes/talk-list2.html

+1-1
@@ -5,7 +5,7 @@
 <details class=talk><summary><div class="grid"><a href="talks/simd_operations_in_webgpu_for_ml.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/cke8ahza81hlp0731tiz1d295/thumbs/thumb-002.jpeg" alt="Watch SIMD operations in WebGPU for ML" width=200 class="tn"></a><a href="talks/simd_operations_in_webgpu_for_ml.html">SIMD operations in WebGPU for ML</a><span class="summary"> by Mehmet Oguz Derin - 5 min <span></span></span></div><span class=added>Added on 2020-08-26</span></summary><p><a href="talks/simd_operations_in_webgpu_for_ml.html">5 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Mehmet Oguz Derin</dd></dl></details>
 <details class=talk><summary><div class="grid"><a href="talks/accelerated_graphics_and_compute_api_for_machine_learning_directml.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckdobv3o075l707723b397arm/thumbs/thumb-004.jpeg" alt="Watch Accelerated graphics and compute API for Machine Learning - DirectML" width=200 class="tn"></a><a href="talks/accelerated_graphics_and_compute_api_for_machine_learning_directml.html">Accelerated graphics and compute API for Machine Learning - DirectML</a><span class="summary"> by Chai Chaoweeraprasit (Microsoft) - 10 min <span></span></span></div></summary><p><a href="talks/accelerated_graphics_and_compute_api_for_machine_learning_directml.html">10 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Chai Chaoweeraprasit (Microsoft)</dd><dd>Chai leads development of machine learning platform at Microsoft</dd><dt>Abstract</dt><dd>DirectML is Microsoft's hardware-accelerated machine learning platform that powers popular frameworks such as TensorFlow and ONNX Runtime. It expands the framework's hardware footprint by enabling high-performance training and inference on any device with DirectX-capable GPU</dd></dl></details>
 <details class=talk><summary><div class="grid"><a href="talks/accelerate_ml_inference_on_mobile_devices_with_android_nnapi.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckdo78esr4t0j0772i5r7h1td/thumbs/thumb-001.jpeg" alt="Watch Accelerate ML inference on mobile devices with Android NNAPI" width=200 class="tn"></a><a href="talks/accelerate_ml_inference_on_mobile_devices_with_android_nnapi.html">Accelerate ML inference on mobile devices with Android NNAPI</a><span class="summary"> by Miao Wang (Google) - 7 min <span></span></span></div></summary><p><a href="talks/accelerate_ml_inference_on_mobile_devices_with_android_nnapi.html">7 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Miao Wang (Google)</dd><dd> Software Engineer for Android Neural Networks API</dd><dt>Abstract</dt><dd>The Android Neural Networks API (NNAPI) is an Android C API designed for running computationally intensive operations for machine learning on Android devices. NNAPI is designed to provide a base layer of functionality for higher-level machine learning frameworks, such as TensorFlow Lite and Caffe2, that build and train neural networks. The API is available on all Android devices running Android 8.1 (API level 27) or higher. Based on an app’s requirements and the hardware capabilities on an Android device, NNAPI can efficiently distribute the computation workload across available on-device processors, including dedicated neural network hardware (NPUs and TPUs), graphics processing units (GPUs), and digital signal processors (DSPs).</dd></dl></details>
-<details><summary><div class="grid"><span>PENDING </span><a>Heterogeneous parallel programming with open standards using oneAPI and Data Parallel C++</a><span class="summary"> by Jeff Hammond (Intel) <span></span></span></div></summary><dl><dt>Speaker</dt><dd>Jeff Hammond (Intel)</dd><dd>Jeff Hammond is a Principal Engineer at Intel where he works on a wide range of high-performance computing topics, including parallel programming models, system architecture and open-source software. He has published more than 60 journal and conference papers on parallel computing, computational chemistry, and linear algebra software. Jeff received his PhD in Physical Chemistry from the University of Chicago.</dd><dt>Abstract</dt><dd>Diversity in computer architecture and the unceasing demand for application performance in data-intensive workloads are never-ending challenges for programmers. This talk will describe Intel’s oneAPI initiative, which is an open ecosystem for heterogeneous computing that supports high-performance data analytics, machine learning and other workloads. A key component of this is Data Parallel C++, which is based on C++17 and Khronos SYCL and supports direct programming of CPU, GPU and FPGA platforms. We will describe how oneAPI and Data Parallel C++ can be used to build high-performance applications for a range of devices.</dd></dl></details>
+<details class=talk><summary><div class="grid"><a href="talks/heterogeneous_parallel_programming_with_open_standards_using_oneapi_and_data_parallel_c_.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckeijprvxa4pq07313u8gw5kd/thumbs/thumb-001.jpeg" alt="Watch Heterogeneous parallel programming with open standards using oneAPI and Data Parallel C++" width=200 class="tn"></a><a href="talks/heterogeneous_parallel_programming_with_open_standards_using_oneapi_and_data_parallel_c_.html">Heterogeneous parallel programming with open standards using oneAPI and Data Parallel C++</a><span class="summary"> by Jeff Hammond (Intel) - 12 min <span></span></span></div><span class=added>Added on 2020-09-01</span></summary><p><a href="talks/heterogeneous_parallel_programming_with_open_standards_using_oneapi_and_data_parallel_c_.html">12 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Jeff Hammond (Intel)</dd><dd>Jeff Hammond is a Principal Engineer at Intel where he works on a wide range of high-performance computing topics, including parallel programming models, system architecture and open-source software. He has published more than 60 journal and conference papers on parallel computing, computational chemistry, and linear algebra software. Jeff received his PhD in Physical Chemistry from the University of Chicago.</dd><dt>Abstract</dt><dd>Diversity in computer architecture and the unceasing demand for application performance in data-intensive workloads are never-ending challenges for programmers. This talk will describe Intel’s oneAPI initiative, which is an open ecosystem for heterogeneous computing that supports high-performance data analytics, machine learning and other workloads. A key component of this is Data Parallel C++, which is based on C++17 and Khronos SYCL and supports direct programming of CPU, GPU and FPGA platforms. We will describe how oneAPI and Data Parallel C++ can be used to build high-performance applications for a range of devices.</dd></dl></details>
 <details class=talk><summary><div class="grid"><a href="talks/enabling_distributed_dnns_for_the_mobile_web_over_cloud_edge_and_end_devices.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckdobyv0p76q107720jc58j6h/thumbs/thumb-002.jpeg" alt="Watch Enabling Distributed DNNs for the Mobile Web Over Cloud, Edge and End Devices" width=200 class="tn"></a><a href="talks/enabling_distributed_dnns_for_the_mobile_web_over_cloud_edge_and_end_devices.html">Enabling Distributed DNNs for the Mobile Web Over Cloud, Edge and End Devices</a><span class="summary"> by Yakun Huang & Xiuquan Qiao (BPTU) - 9 min <span></span></span></div></summary><p><a href="talks/enabling_distributed_dnns_for_the_mobile_web_over_cloud_edge_and_end_devices.html">9 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Yakun Huang & Xiuquan Qiao (BPTU)</dd><dt>Abstract</dt><dd> This talk introduces two deep learning technologies for the mobile web over cloud, edge and end devices. One is an adaptive DNN execution scheme, which partitions and performs the computation that can be done within the mobile web, reducing the computing pressure of the edge cloud. The other is a lightweight collaborative DNN over cloud, edge and devices, which provides a collaborative mechanism with the edge cloud for accurate compensation.</dd></dl></details>
 <details class=talk><summary><div class="grid"><a href="talks/collaborative_learning.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckdobw6x0764l07729krgy57d/thumbs/thumb-001.jpeg" alt="Watch Collaborative Learning" width=200 class="tn"></a><a href="talks/collaborative_learning.html">Collaborative Learning</a><span class="summary"> by Wolfgang Maß (DFKI) - 10 min <span></span></span></div></summary><p><a href="talks/collaborative_learning.html">10 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Wolfgang Maß (DFKI)</dd><dd> Professor at Saarland University and scientific director at DFKI</dd><dt>Abstract</dt><dd> The execution of data analysis services in a browser on devices has recently gained momentum, but the lack of computing resources on devices and data protection regulations are forcing strong constraints. In our talk we will present a browser-based collaborative learning approach for running data analysis services on peer-to-peer networks of devices. Our platform is developed in Javascript, supports modularization of services, model training and usage on devices (tensorflow.js), sensor communication (mqtt), and peer-to-peer communication (WebRTC) with role-based access-control (oauth 2.0).</dd></dl></details>
 <details class=talk><summary><div class="grid"><a href="talks/introducing_wasi_nn.html"><img src="https://cjx1uopmt0m4q0667xmnrqpk.blob.core.windows.net/ckdobysfk76ns0772a6u4dbq5/thumbs/thumb-001.jpeg" alt="Watch Introducing WASI-NN" width=200 class="tn"></a><a href="talks/introducing_wasi_nn.html">Introducing WASI-NN</a><span class="summary"> by Mingqiu Sun & Andrew Brown (Intel) - 7 min <span></span></span></div></summary><p><a href="talks/introducing_wasi_nn.html">7 minutes presentation</a></p><dl><dt>Speaker</dt><dd>Mingqiu Sun & Andrew Brown (Intel)</dd><dd> Senior PE at Intel & software engineer at Intel</dd><dt>Abstract</dt><dd> Trained machine learning models are typically deployed on a variety of devices with different architectures and operating systems. WebAssembly provides an ideal portable form of deployment for those models. In this talk, we will introduce the WASI-NN initiative we have started in the WebAssembly System Interface (WASI) community, which would standardize the neural network system interface for WebAssembly programs.</dd></dl></details>

0 commit comments