Nvidia's RAPIDS.ai: Massively Accelerated Modern Data-Science | AISC
Author: LLMs Explained - Aggregate Intellect - AI.SCIENCE
Uploaded: 2020-07-02
Views: 511
Description:
Speaker(s): Griffin Lacey, Mukundhan Srinivasan
Facilitator(s): Alireza Darbehani
Find the recording, slides, and more info at https://ai.science/e/rapids-massively...
Motivation / Abstract
Why should you attend this talk?
Using RAPIDS and GPUs, users can see their data science models run 100x faster or more, with little to no code changes required.
The RAPIDS suite of open-source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs.
Seamlessly scale from GPU workstations to multi-GPU servers and multi-node clusters with Dask.
Accelerate your Python data science toolchain with minimal code changes and no new tools to learn (a minimal cuDF sketch follows this list).
Increase machine learning model accuracy by iterating on models faster and deploying them more frequently.
Drastically improve your productivity with more interactive data science, including GPU-accelerated tools like XGBoost.
RAPIDS is an open-source project supported by NVIDIA; it also builds on Numba, Apache Arrow, and many other open-source projects.
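As an illustration of the "minimal code changes" point (a sketch added here, not material from the talk, and assuming a CUDA-capable GPU with RAPIDS installed), cuDF exposes a pandas-like DataFrame API, so a familiar groupby pipeline runs on the GPU with little more than an import change; the column names and values below are invented for the example:

import cudf  # RAPIDS GPU DataFrame library; its API mirrors pandas

# Build a small DataFrame directly in GPU memory, just as you would with pandas
gdf = cudf.DataFrame({
    "key": ["a", "b", "a", "c", "b", "a"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# Familiar pandas-style groupby/aggregate, executed on the GPU
result = gdf.groupby("key").agg({"value": "mean"})
print(result)

# Hand results back to the CPU ecosystem when needed
pdf = result.to_pandas()

Scaling the same kind of pipeline from a single workstation to multi-GPU servers and clusters is typically done through Dask (dask_cudf), keeping the same DataFrame-style code.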
What was discussed?
An introduction to GPUs and how such large speedups are possible with minimal code changes.
An overview of popular RAPIDS tools such as the GPU-accelerated counterparts of pandas (cuDF) and scikit-learn (cuML); a minimal cuML sketch follows this list.
Guidance on how and where to get started.
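For context on the cuML point above, here is a minimal sketch of how cuML mirrors scikit-learn's estimator interface, shown with KMeans; it is an assumed typical usage rather than material from the talk, it requires RAPIDS cuML and a GPU, and the data is invented for illustration:

import cudf
from cuml.cluster import KMeans  # GPU-accelerated estimator from RAPIDS cuML

# Toy 2-D dataset as a GPU DataFrame (illustrative values)
X = cudf.DataFrame({
    "x": [1.0, 1.1, 0.9, 8.0, 8.1, 7.9],
    "y": [1.0, 0.9, 1.1, 8.0, 7.9, 8.1],
})

# Same fit-then-inspect-attributes pattern as sklearn.cluster.KMeans
kmeans = KMeans(n_clusters=2, random_state=0)
kmeans.fit(X)
print(kmeans.labels_)           # cluster assignment per row
print(kmeans.cluster_centers_)  # learned centroids

Because the fit/predict pattern and attributes such as labels_ and cluster_centers_ follow scikit-learn's conventions, existing model code usually needs little more than an import swap.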
------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details