Important Speaker Instructions for Edge 2016

Cloud Native Computing Technology Supporting HPC/Cognitive Workflows
Bruce D'Amora, Seelam Seelam, Yong Feng
September 21, 2016 | 2016 IBM Corporation | #ibmedge

Please Note: IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice and at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality, and may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remain at our sole discretion. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

Background and Outline

- Big Data Challenging both Cognitive and Traditional Workloads
- Next Generation of HPC and Cognitive Workflows
- IBM Container Cloud
- Requirements for HPC/Cognitive Workflows
- Enhanced IBM HPC/Cognitive Container Cloud

Big Data Challenging both Cognitive and Traditional Workloads

Data is the New Basis of Competitive Value
The world is awash in data (sensor, modeling, social), with pools of dark data collected but un-analyzed.

Oil and Gas
- 15 PB of survey data; 10+ months to process new survey data once acquired
- 100x compute needed for deep-water imaging
- Network and high-bandwidth storage intensive; long-term storage needs
- Multi-TB data sets for visualization
- 1 TB per oilfield per day; 2 TB per rig per day; 80% dark data

Genomics
- Personalized medicine; next-gen sequence data doubling every 5 months
- Some leading institutions at >100 PB in 1-2 years
- Projects increasingly national scale (UK, China, UAE, US)
- Large-cluster and large-SMP needs; high-performance file system and scalable storage
- Multi-stage workflow process for personalized treatment; 4 TB/day

Financial Analysis
- 10k to 100k databases; multi-PB volume
- 400M tasks per day; <100 ms response required
- Tight coupling of near-real-time analytics and trading systems to underlying data sets

Firms of all sizes are experiencing data challenges as they focus on insight.

High Performance Computing to High Performance Insights
Massive data requirements drive a composite architecture for big data, complex analytics, modeling, and simulation. A data-centric architecture will appeal to segments experiencing an explosion of data and the associated computational demands: business analytics (financial analytics, business intelligence, social analytics) and technical computing (oil and gas, climate and environment, science, life sciences). Cloud meets data-centric cognitive systems: big data, cloud, analytics, Watson.

HPC/Cognitive Workflows: Mixed Compute Capabilities Required

Analytics capability (e.g. oil and gas seismic at 4-40 racks, reservoir at 2-20 racks; value-at-risk analytics; all-source analytics; graph analytics; image analysis; science at 1-5+ racks):
- Complex code; data-dependent code paths and computation
- Lots of indirection, i.e. pointer chasing; often memory-system latency dependent
- Limited threading opportunity; limited scalability
- C++ templated codes; limited opportunity for vectorization

Massively parallel compute capability (e.g. financial analytics at 1-100s of racks):
- Simple kernels; ops dominated (e.g. DGEMM, Linpack)
- Simple data access patterns; can be preplanned for high performance
- Throughput capability

Heterogeneous solutions are essential to meeting power and cost requirements.

Next Generation Cognitive and HPC Workflows

Spark for Big Data, Analytics and Machine Learning
[Stack diagram: data and batch-service workloads on top of frameworks; Mesos (master + slave) as the resource manager; GPFS or some other distributed parallel file system; server resources: CPUs, InfiniBand, GPUs, NVMe, FPGAs.]

Machine Learning Workflow in the Cloud
- Scoring: run as micro-services in the cloud (CSF) for autoscaling and high availability
- Training: often done offline on a dedicated platform; largely manual, with varying degrees of automation; skill- and labor-intensive; custom training methodology (little standardization)
- Pipeline: ingestion and curation of training data -> data prep -> feature extraction -> training -> evaluation -> trained model -> cognitive engine (operations)
- Corpora may be stateful (online learning)
- Workload characteristics: interactive, responsive, compute-intensive, data-intensive, stateful, long-running

GPU Acceleration of Spark MLlib
Call path: Spark application -> MLlib -> JNI -> accelerated CUDA kernel. The goal is to leverage the best of Spark (scale-out) and the GPU (scale-up). There is growing interest in Spark/MLlib as an easy scale-out platform for machine learning, yet current versions of popular machine learning kernels in MLlib lack performance.
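Since the accelerated kernel sits behind JNI, the native library only has to be visible to the Spark executors. A minimal sketch of wiring that into a job submission, with the jar name and library path entirely hypothetical:

```shell
# Sketch only: pointing Spark executors at a native CUDA kernel library
# that an MLlib routine reaches through JNI. Paths and names are hypothetical.
APP_JAR=als-example.jar
NATIVE_LIB_DIR=/opt/cuda-mllib/lib   # would hold the JNI .so for the ALS kernel

SUBMIT="spark-submit --master mesos://mesos-master:5050"
SUBMIT="$SUBMIT --conf spark.executor.extraLibraryPath=$NATIVE_LIB_DIR"
SUBMIT="$SUBMIT $APP_JAR"

# Echo rather than execute, so the sketch runs without a Spark install.
echo "$SUBMIT"
```

`spark.executor.extraLibraryPath` is Spark's standard knob for extending the executor's native library search path, which is what a JNI-backed kernel needs.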

Approach: provide new CUDA kernel implementations to offload computation; example: MLlib ALS (alternating least squares). Team: Wei Tan, Liana Fong, David Kung, Rajesh Bordawekar, Benjamin Herta, Minsik Cho, Ruchir Puri.

Example in Development: Engineering Design Problem
Computational chemistry: automated optimization of force fields.
- Accurate force fields underpin most work defined in the work packages
- Force fields are often parameterized to perform one task well, which can lead to a set of parameters with low transferability
- The computational and human expense of parameterizing a force field has led to out-of-set use
- Automated parameterization allows the cost of parameterization to be borne by HPC (with DCC acceleration); human cost is minimized by an easy-to-use front end

Potential implementation (cloud user plus HPC resource, on-premise or in public cloud):
- User defines the problem
- Relevant data is retrieved
- A cognitive optimizer generates parameters
- A database of prior calculations is searched to avoid replication
- Remaining calculations run on the HPC resource
- Final parameters are returned to the user

Transforming Seismic Workflows to the Cloud

Scheduling in Cloud with Marathon and Mesos

IBM Container Cloud

Architecture Approach: IBM Container Cloud
- Basic open-source stack including a minimal set of features that IBM is deploying for on-premise and SoftLayer cloud
- Core component of IBM Cloud container technology
- Developed in Research in collaboration with Cloud development teams

- Basis for the future container cloud that will support HPC, Cloud, Cognitive/DLaaS, and Analytics clients needing on-premise or SoftLayer support
- Open sourcing to gain the support of a broader internal and external community
- Goal: a single stack that supports on-premise and public cloud activities

Adding a Traditional Batch Scheduler
- IBM Platform LSF is a traditional batch scheduler used in the HPC domain
- Mesos + (Marathon | Swarm | Kubernetes) are dominant in the cloud space
- Create a platform that can support both traditional highly scalable batch schedulers (LSF) and cloud-native frameworks like Mesos + (Marathon | Swarm | Kubernetes)
- Need to coordinate resources between the various job-launch frameworks
- Prototyping for cooperative HPC and cloud-native job schedulers is in process

Container Cloud for HPC, Cognitive, and Analytics Workflows
HPC, Cognitive, Cloud, Systems, and Software teams are developing the Container Cloud stack. We are building a much larger prototype at IBM POK: 100 nodes with InfiniBand, GPUs, parallel file systems, and POWER8.
- Prototype DCS/Cognitive software stack: Docker and Mesos components; Platform/Cloud integration; HPC software stack; IB-optimized Spark
- HPC/Cognitive hardware: POWER8 nodes; NVIDIA K40 GPUs; GPU drawers; NVMe drawers; IB, 10GbE, 1GbE
- HPC/Cognitive/Analytics stack: Mesos, Docker, Marathon, and Kubernetes with GPU support; better integration of Platform components; enhancements to Mesos with MPI services
- For workflows of interest to our clients, virtualization is not relevant; workflows span 10s to 1000s of individual nodes

What Use Cases Do We Support Today?

Traditional HPC MPI job on a multi-node cluster with the LSF scheduler (bsub):
- The user submits a job, e.g. DL_MESO
- The LSF master queries EGO for compute resources
- Compute nodes run the DL_MESO MPI tasks
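For this use case the submission is a single bsub call. A hedged sketch, with the slot count and output path illustrative (site policies and the DL_MESO invocation will differ):

```shell
# Sketch only: submitting the DL_MESO MPI job through LSF. The LSF master
# then queries EGO for the requested compute slots.
NPROCS=32
JOB="bsub -n $NPROCS -o dl_meso.%J.out mpirun ./dl_meso"

# Echo rather than execute, so the sketch runs without an LSF cluster.
echo "$JOB"
```

`-n` requests the number of slots and `-o` captures job output; `%J` expands to the LSF job ID.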

Apache Mesos workflow:
- The user runs Spark or Docker Swarm; Swarm allows the cluster to look like one big machine to the Docker client
- The Mesos master queries EGO for compute resources, communicating with EGO via the Mesos EGO plugin
- The LSF master communicates with EGO via Host Factory
- EGO coordinates all the resources between Mesos and Platform LSF; compute nodes run Spark tasks or DL_MESO MPI tasks
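On the cloud-native side of this picture, a framework such as Marathon is driven through its REST API; the Mesos master (with EGO coordinating behind it) then supplies the resources. A sketch of launching a long-running app, with the host name, app id, and sizing all hypothetical:

```shell
# Sketch only: launching an app through Marathon's /v2/apps REST endpoint.
MARATHON=http://marathon.example.com:8080
APP='{"id":"/spark-worker","cmd":"./start-worker.sh","cpus":2,"mem":4096,"instances":4}'
CMD="curl -X POST -H 'Content-Type: application/json' -d '$APP' $MARATHON/v2/apps"

# Echo rather than execute, so the sketch runs without a Marathon master.
echo "$CMD"
```

POST /v2/apps with a JSON app definition (id, cmd, cpus, mem, instances) is Marathon's standard app-launch call; Marathon keeps the requested number of instances running on offers from Mesos.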

Requirements for HPC/Cognitive Workflows

Toward HPC/Cognitive in the Cloud

Critical Elements for HPC/Cognitive Workloads
- GPU support is critical for simulation, modeling, and machine/deep learning; CUDA MPS support
- Parallel file system support: GPFS
- High-performance networking: 10GbE, InfiniBand
- High-performance, scalable schedulers and resource managers: IBM Platform LSF is used for traditional high-performance applications that run to completion

- Cloud-native frameworks to support workflows of applications
- Support for distributed applications: MPI/OpenMP is pervasive in the HPC world
- Performance-sensitive workloads leverage processor and/or thread affinity; cloud frameworks don't expose this to the end user

GPU Support in the Cloud
- GPU acceleration is an important requirement for HPC and Cognitive workflows
- Considerations for containerization (Docker) of CUDA or OpenGL applications: both CUDA and OpenGL have a toolkit component and a kernel-module component; the toolkit can be containerized, but the kernel module exists on the native OS
- Exposing the GPU at the Mesos resource-management layer required modifications to Mesos; the GPU is exposed as a resource from Marathon
- On-going work to integrate into Kubernetes
- Support for both Power and x86 platforms

Parallel Filesystems
- Docker currently supports binding volumes to container-local directories
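Because the CUDA toolkit can live in the image while the kernel module stays on the host, the container needs the host's GPU device nodes passed through. A 2016-era sketch of that pattern, with the image name hypothetical and the device list varying per machine:

```shell
# Sketch only: exposing NVIDIA device nodes to a Docker container.
# The CUDA toolkit is baked into the image; the driver stays on the host OS.
RUN="docker run --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 cuda-app:latest"

# Echo rather than execute, so the sketch runs without a GPU host.
echo "$RUN"
```

The in-image CUDA user-space libraries must match the host driver version, which is the main operational wrinkle of this split.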

- Run a container with the volume-bind option to attach a GPFS file system to the container's local file system
- A GPFS volume driver is an alternative: add GPFS support to Docker so it can automatically use the native GPFS file system
- Parallel file systems on demand: layer a parallel file system on a local storage device (local hard disk, local NVMe drive) by instantiating it in a container; experimenting with OrangeFS

MPI is a Key Requirement for HPC/Cognitive
- MPI is prevalent across high-performance/technical computing applications today; some ML frameworks are distributed with MPI, e.g. Torch and Caffe
- Developers often use different MPI implementations (Spectrum MPI, Open MPI, MPICH2, MVAPICH, ...), which is facilitated by containerizing the MPI development environment
- The expectation is that MPI will leverage host networks for the fastest communication; IB and 10GbE are commonly used today
- Running MPI applications from Docker containers across nodes is not currently supported; specifically, encapsulating the MPI environment and an mpirun ENTRYPOINT in a container
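The volume-bind option mentioned above is Docker's -v flag. A minimal sketch, with the GPFS mount point and image name hypothetical:

```shell
# Sketch only: attaching a host-mounted GPFS file system to a container's
# local file system via a bind mount (:rw or :ro chosen per workload).
GPFS_MOUNT=/gpfs/fs1
RUN="docker run -v $GPFS_MOUNT:/data:rw hpc-app:latest"

# Echo rather than execute.
echo "$RUN"
```

The bind mount requires GPFS to already be mounted on the host; the volume-driver alternative would remove that prerequisite from the container user's view.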

Traditional Batch Schedulers
- Provide a framework for launching MPI distributed applications across a cluster of nodes
- Common frameworks such as LSF, Slurm, PBS, Torque, Cobalt, etc. work well with MPI
- Support task and affinity mapping
- Can be configured to launch CUDA multi-process services on a per-application basis

IBM Container Cloud with HPC/Cognitive Enhancements

Develop Support for Docker Applications Requiring MPI
Premise: MPI libraries should be encapsulated in the container like other application dependencies; different flavors of MPI exist and developers regularly switch.

This shifts the burden of supporting MPI in the container from Ops to Dev.

Challenges:
- MPI includes a run-time environment as well as a build environment
- The MPI run-time environment must be started on each node required to run the application, at application (container) run time
- The run-time environment must be started from within the container
- The master MPI runtime must launch processes in containers across nodes

Solution: develop MPI services that integrate with Swarm, Marathon, and Kubernetes.

HPC Use Cases: Batch Scheduler Frameworks and Docker
Batch scheduler used to launch Docker containers requiring MPI, e.g. LSF (bsub):
- LSF currently does not support MPI in a container; MPI support for containers is scheduled for late August
- Initially, only MPI tasks launched from a single container can use shared memory
- Affinity mapping should work as it does today in LSF
- Open question: how does docker pull work on compute nodes that don't have access to the internet?
- The prototype uses an LSF JOB_STARTER script to execute docker run
- Need support to plug in other schedulers: SLURM (LLNL), COBALT (ANL), PBS (ORNL)
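The JOB_STARTER prototype mentioned above hinges on LSF's queue-level JOB_STARTER parameter, which wraps every job command submitted to that queue. A sketch with hypothetical paths and image name; the lsb.queues fragment is shown as comments:

```shell
# Sketch only: an LSF JOB_STARTER wrapper that runs the user's command
# inside a Docker container. A queue in lsb.queues would reference it:
#
#   Begin Queue
#   QUEUE_NAME  = docker
#   JOB_STARTER = /shared/bin/docker_starter.sh %USRCMD
#   End Queue
#
# docker_starter.sh (this script) receives the user's command as arguments.
USRCMD=${1:-./dl_meso}
RUN="docker run --rm -v /gpfs/fs1:/data hpc-mpi:latest $USRCMD"

# Echo rather than execute, so the sketch runs without Docker or LSF.
echo "$RUN"
```

With %USRCMD, LSF substitutes the user's submitted command into the starter invocation, so `bsub -q docker mpirun ./dl_meso` transparently runs inside the container.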

HPC Use Cases: Cloud-Native Frameworks and Docker
Docker-containerized MPI application launched via cloud-native frameworks (Marathon, Kubernetes, Swarm, or some TBD HPC/Cognitive framework):
- The container encapsulates the application and its dependencies, including MPI
- The mpirun HOSTFILE is provided by Marathon, Kubernetes, Swarm, etc.
- Expect to run on the host network
- Expect to use the host parallel file system and/or object storage
- Expect MPI tasks within a container to use shared memory for message passing; there is no shared memory for MPI tasks running in multiple containers on a node

Proposed MPI Service Components
- An HPC Workflows Composition Manager selects the container framework (Docker Swarm, Kubernetes, or Marathon) and launches the MPI-container framework service; command-line parameters include the containerized app, MPI runtime options, and other application options
- The MPI-container framework service uses the framework's native discovery service for containers, launches the Mesos-MPI service, sends mpirun resource requirements to the container framework, and receives the MPI hostlist from Mesos-MPI
- Underneath sit the Mesos master (with GPU support), the Docker Engine, and the host OS on the master node
- Mesos-MPI service: we don't really need another service at this level; the framework can communicate directly with the Mesos master. It receives job resource requirements from the MPI-container framework service, receives the MPI hostlist from the Mesos master, and launches the application container across the nodes in the hostlist

Goal architecture highlights: enhanced MPI/Docker services; enhanced GPU, accelerator, and extended-memory services; DLaaS services; shared quota management.

Traditional batch (Platform LSF) and cloud-native resource management services, in an architecture leveraging the base IBM Container Cloud software stack.

Goal: IBM HPC/Cognitive Container Cloud Stack
[Architecture diagram. Platform services: service LB controller, network controller, UAA/login, API proxy, UI dashboard, shared quota management, Logmet, private registry, shared storage. Master node: LSF batch scheduler, Kubernetes, MPI Docker extensions, session scheduler, Mesos Docker executor(s), Docker Engine, Mesos master (with GPU support), crawler (web crawler), HPC workflow composition manager, Docker Swarm, host OS. Compute nodes: Mesos agent (with GPU support), Platform resource scheduler, Kubelet, Docker Engine with GPU extensions, Fluentd, OVN agent, Flanneld, Heapster, Kuryr, resource scheduler agent, Platform MPI Docker extensions, host OS. Plus improvements in component scalability.]

Thank You

Notices and Disclaimers
Copyright 2016 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form without written permission from IBM. U.S. Government Users Restricted Rights: use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM.

Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IN NO EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO, LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. IBM products and services are warranted according to the terms and conditions of the agreements under which they are provided.

IBM products are manufactured from new parts or new and used parts. In some cases, a product may not be new and may have been previously installed. Regardless, our warranty terms apply. Any statements regarding IBM's future direction, intent, or product plans are subject to change or withdrawal without notice. Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those

customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs, or services available in all countries in which IBM operates or does business. Workshops, sessions, and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall, constitute legal or other guidance or advice to any individual participant or their specific situation. It is the customer's responsibility to ensure its own compliance with legal requirements and to obtain the advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer's business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer is in compliance with any law.

Notices and Disclaimers, Continued
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM's products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks, or other intellectual property right. IBM, the IBM logo, ibm.com, Aspera, Bluemix, Blueworks Live, CICS, Clearcase, Cognos, DOORS, Emptoris, Enterprise Document Management System, FASP, FileNet, Global Business Services, Global Technology Services, IBM ExperienceOne, IBM SmartCloud, IBM Social Business, Information on Demand, ILOG, Maximo, MQIntegrator, MQSeries, Netcool, OMEGAMON, OpenPower, PureAnalytics, PureApplication, pureCluster, PureCoverage, PureData, PureExperience, PureFlex, pureQuery, pureScale, PureSystems, QRadar, Rational, Rhapsody, Smarter Commerce, SoDA, SPSS, Sterling Commerce, StoredIQ, Tealeaf, Tivoli, Trusteer, Unica, urban{code}, Watson, WebSphere, Worklight, X-Force, and System z Z/OS are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
