Summary of All SC14 Workshop Submissions With Important Dates – August Deadlines Fast Approaching

August 5, 2014 by Rob Farber

Hurry to submit your work! Following is a current list of SC14 workshop submissions, annotated with their important dates. Please check each workshop's website for updates.

The 9th Parallel Data Storage Workshop (PDSW)

Abstract:  Peta- and exascale computing infrastructures make unprecedented demands on storage capacity, performance, concurrency, reliability, availability, and manageability. This one-day workshop focuses on the data storage and management problems and emerging solutions found in peta- and exascale scientific computing environments, with special attention to issues in which community collaboration can be crucial for problem identification, workload capture, solution interoperability, standards with community buy-in, and shared tools. Addressing storage media ranging from tape, HDD, and SSD, to new media like NVRAM, the workshop seeks contributions on relevant topics, including but not limited to performance and benchmarking, failure tolerance problems and solutions, APIs for high performance features, parallel file systems, high bandwidth storage architectures, support for high velocity or complex data, metadata intensive workloads, autonomics for HPC storage, virtualization for storage systems, archival storage advances, resource management innovations, and incorporation of emerging storage technologies.
For recent PDSW programs, please see www.pdsw.org

CALL FOR PAPERS POSTER – download, print, and hang one up at your office / department!

  • Due: 9pm PDT, Saturday, August 30, 2014
  • Notification to authors: Tuesday, September 30, 2014
  • Camera-ready due: Friday, October 10, 2014
  • Slides due: Saturday, Nov. 15, 2014, 5:00 pm PDT, BEFORE the workshop – please email them to Joan

The 5th International Workshop on Performance Modelling, Benchmarking and Simulation of High Performance Computer Systems (PMBS14)

Abstract: This workshop is concerned with the comparison of high-performance computing systems through performance modeling, benchmarking or through the use of tools such as simulators. We are particularly interested in research which reports the ability to measure tradeoffs in software/hardware co-design to improve sustained application performance. We are also keen to capture the assessment of future systems, for example through work that ensures continued application scalability through exa-scale systems.
The aim of this workshop is to bring together researchers, from industry and academia, concerned with the qualitative and quantitative evaluation and modeling of high-performance computing systems. Authors are invited to submit novel research in areas of performance modeling, benchmarking and simulation; we welcome research that brings together theory and practice. We recognize that the coverage of the term ‘performance’ has broadened to include power and reliability, and that performance modeling is practiced through analytical methods and approaches based on software tools. www.pmbsworkshop.org
  • September 5th, 2014 (23:59 PST) – Full Paper Submissions
  • September 24th, 2014 – Full Paper Notifications
  • October 7th, 2014 – Camera-Ready Papers Due
  • October 10th, 2014 (23:59 PST) – Late Breaking and Short Paper Submissions
  • October 20th, 2014 – Late Breaking and Short Paper Notifications
  • November 16th, 2014 – PMBS14 Workshop
  • November 16th–21st, 2014 – Supercomputing Conference Dates


IA^3 2014 – Fourth Workshop on Irregular Applications: Architectures and Algorithms

Abstract:  Many data intensive applications are naturally irregular. They may present irregular data structures, control flow or communication. Current supercomputing systems are organized around components optimized for data locality and regular computation. Developing irregular applications on current machines demands a substantial effort, and often leads to poor performance. However, solving these applications efficiently is a key requirement for next generation systems. The solutions needed to address these challenges can only come by considering the problem from all perspectives: from micro- to system-architectures, from compilers to languages, from libraries to runtimes, from algorithm design to data characteristics. Only collaborative efforts among researchers with different expertise, including end users, domain experts, and computer scientists, could lead to significant breakthroughs. This workshop aims at bringing together scientists with all these different backgrounds to discuss, define and design methods and technologies for efficiently supporting irregular applications on current and future architectures.
  • Abstract submission:  25 August 2014
  • Full or position paper submission: 1 September 2014
  • Notification of acceptance:  3 October 2014
  • Camera-ready papers: 10 October 2014
  • Workshop: 16 November 2014

2014 International Workshop on Data Intensive Scalable Computing Systems (DISCS-2014)

Abstract: Existing high performance computing (HPC) systems are designed primarily for workloads requiring high rates of computation. However, the widening performance gap between processors and storage, and trends toward higher data intensity in scientific and engineering applications, suggest there is a need to rethink HPC system architectures, programming models, runtime systems, and tools with a focus on data intensive computing. The 2014 International Workshop on Data Intensive Scalable Computing Systems (DISCS) builds on the momentum generated by its two predecessor workshops, providing a forum for researchers interested in HPC and data intensive computing to exchange ideas and discuss approaches for addressing Big Data challenges. The workshop includes a keynote address and presentation of peer-reviewed research papers, with ample opportunity for informal discussion throughout the day. http://discl.cs.ttu.edu/discs-2014/

  • Submission deadline: August 15, 2014 (extended from August 8, 2014)
  • Author notification: September 19, 2014
  • Camera-ready version due for proceedings: October 10, 2014
  • Workshop date: November 16, 2014

5th SC Workshop on Big Data Analytics: Challenges and Opportunities

Abstract: The past decade has witnessed a data explosion, and petabyte-sized data archives are no longer uncommon. It is estimated that organizations with high end computing (HEC) infrastructures and data centers are doubling the amount of data that they are archiving every year. On the other hand, computing infrastructures are becoming more heterogeneous. The first four workshops, held with SC 2010-2013, were a great success. Continuing on this success, we propose to broaden the topic of this workshop with an emphasis on novel middleware (e.g., in situ) infrastructures that facilitate efficient data analytics on big data. The proposed workshop intends to bring together researchers, developers, and practitioners from academia, government, and industry to discuss new and emerging trends in high end computing platforms, programming models, middleware and software services, and outline the data mining and knowledge discovery approaches that can efficiently exploit this modern computing infrastructure. http://web.ornl.gov/sci/knowledgediscovery/CloudComputing/BDAC-SC14/

  • Paper Submission: September 05, 2014
  • Acceptance Notice: October 15, 2014
  • Camera-Ready Copy: November 01, 2014

The 9th Workshop on Workflows in Support of Large-Scale Science (WORKS14)

Abstract: Data Intensive Workflows (a.k.a. scientific workflows) are routinely used in most scientific disciplines today, especially in the context of parallel and distributed computing. Workflows provide a systematic way of describing the analysis and rely on workflow management systems to execute the complex analyses on a variety of distributed resources. This workshop focuses on the many facets of data-intensive workflow management systems, ranging from job execution to service management and the coordination of data, service and job dependencies. The workshop therefore covers a broad range of issues in the scientific workflow lifecycle that include: data intensive workflows representation and enactment; designing workflow composition interfaces; workflow mapping techniques that may optimize the execution of the workflow; workflow enactment engines that need to deal with failures in the application and execution environment; and a number of computer science problems related to scientific workflows such as semantic technologies, compiler methods, fault detection and tolerance. http://works.cs.cf.ac.uk/

  • Papers Due: August 1st 2014
  • Notifications of Acceptance: September 1st 2014
  • Final Papers Due: October 1st, 2014

Energy Efficient Supercomputing (E2SC)

Abstract: With exascale systems on the horizon, we have ushered in an era in which power and energy consumption are the primary concerns for scalable computing. To achieve a viable exaflop high performance computing capability, revolutionary methods are required, with a stronger integration among hardware features, system software, and applications. Equally important are capabilities for fine-grained spatial and temporal measurement and control to enable energy-efficient computing across all of these layers. Current approaches to energy-efficient computing rely heavily on power-efficient hardware in isolation. However, it is pivotal for hardware to expose mechanisms for energy efficiency so that power and energy consumption can be optimized for various workloads. At the same time, high-fidelity measurement techniques, typically ignored in data-center-level measurement, are of high importance for a scalable and energy-efficient interplay among the application, system software, and hardware layers.

Ultravis ’14: The 9th Workshop on Ultrascale Visualization

Abstract: The output from leading-edge scientific simulations and experiments is so voluminous and complex that advanced visualization techniques are necessary to interpret the calculated results. Even though visualization technology has progressed significantly in recent years, we are barely capable of exploiting petascale data to its full extent, and exascale datasets are on the horizon. This workshop aims at addressing this pressing issue by fostering communication between visualization researchers and the users of visualization. Attendees will be introduced to the latest and greatest research innovations in large-scale data visualization, and also learn how these innovations impact scientific supercomputing and discovery.

7th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS) 2014

Abstract: The 7th workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS) will provide the scientific community a dedicated forum for presenting new research, development, and deployment efforts of large-scale many-task computing (MTC) applications on large scale clusters, clouds, grids, and supercomputers. MTC, the theme of the workshop, encompasses loosely coupled applications, which are generally composed of many tasks that achieve some larger application goal. This workshop will cover challenges that can hamper efficiency and utilization in running applications on large-scale systems, such as local resource manager scalability and granularity, efficient utilization of raw hardware, parallel file-system contention and scalability, data management, I/O management, reliability at scale, and application scalability. We welcome paper submissions on theoretical, simulation, and systems topics, with special consideration to papers addressing the intersection of petascale/exascale challenges with large-scale cloud computing. We invite the submission of original research work of 6 pages. For more information, see: http://datasys.cs.iit.edu/events/MTAGS14/.

 

  • Call for Papers: ACM MTAGS 2014 — abstracts due August 18th, 2014

7th Workshop on High Performance Computational Finance

Abstract: The purpose of this workshop is to bring together practitioners, researchers, vendors, and scholars from the complementary fields of computational finance and high performance computing, in order to promote an exchange of ideas, develop common benchmarks and methodologies, discuss future collaborations and develop new research directions. Financial companies increasingly rely on high performance computers to analyze high volumes of financial data, automatically execute trades, and manage risk.
Recent years have seen a dramatic increase in compute capabilities across a variety of parallel systems. These systems have also become more complex, with trends towards heterogeneous systems consisting of general-purpose cores and acceleration devices. The workshop will enable the dissemination of recent advances and findings in the application of high performance computing to computational finance among researchers, scholars, vendors and practitioners, and will encourage and highlight collaborations between these groups in addressing high performance computing research challenges. http://ewh.ieee.org/conf/whpcf/
  • Revised submission deadline: August 22nd, 11:59 EST
  • Author notification: September 19th
  • Final version due: October 3rd

Workshop on Education for High Performance Computing (EduHPC)

Abstract: Parallel and Distributed Computing (PDC), especially its aspects pertaining to High Performance Computing (HPC), now permeates most computing activities. Certainly, it is no longer sufficient for even basic programmers to acquire only the traditional sequential programming skills. This workshop on the state of the art in high performance, parallel, and distributed computing education will comprise contributed as well as invited papers from academia, industry, and other educational and research institutes on topics pertaining to the teaching of PDC and HPC topics in the Computer Science and Engineering, Computational Science, and Domain Science and Engineering curriculum. The emphasis of the workshop will be on undergraduate education, although graduate education issues are also within scope, and the target audience will include attendees from among SC14 educators, academia, and industry. This effort is in coordination with the NSF/TCPP curriculum initiative on parallel and distributed computing (http://www.cs.gsu.edu/~tcpp/curriculum/index.php). This workshop was first held at SC13, where it was the first education-related regular workshop. http://www.cs.gsu.edu/~tcpp/curriculum/?q=edupdhpc

  • Aug 27, 2014: Paper submission deadline
  • Sept 26, 2014: Author notification
  • Oct 10, 2014:  Camera-ready paper deadline

Integrating Computational Science Into the Curriculum: Models and Challenges

Abstract: Computational science has become the third path to discovery in science and engineering, along with theory and experimentation. It is central to advancing research and is essential in making U.S. industry competitive in the face of international market competition. Yet many universities have not integrated computational science into their curricula. This workshop will focus on the challenges of curriculum reform for computational science and the examples, alternative approaches, and existing and emerging resources that can be used to facilitate those changes. Participants will review competencies and models of computational science programs, learn about the demand for a workforce with computational modeling skills, and survey available resources and opportunities. Participants will analyze their current curriculum and expertise and devise a draft plan of action to advance computational science on their own campuses. https://www.osc.edu/~sgordon/sc14

Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)

Abstract: Progress in scientific research is dependent on the quality and accessibility of software at all levels, and it is critical to address many new challenges related to the development, deployment, and maintenance of reusable software. In addition, it is essential that scientists, researchers, and students are able to learn and adopt a new set of software-related skills and methodologies. Established researchers are already acquiring some of these skills, and in particular a specialized class of software developers is emerging in academic environments as an integral and embedded part of successful research teams. Following a first workshop at SC13, WSSSPE2 will use reviewed short papers, keynote speakers, breakouts, and panels to provide a forum for discussion of the challenges, including both positions and experiences. All material and discussions will be archived for continued discussion. The workshop is anticipated to lead to a special issue of the Journal of Open Research Software.
http://wssspe.researchcomputing.org.uk/wssspe2/

5th Annual Energy Efficient HPC Working Group Workshop

Abstract: This annual workshop is organized by the Energy Efficient HPC Working Group (http://eehpcwg.lbl.gov/). It provides a strong blended focus that includes both the facilities and system perspectives, from architecture through design and implementation. The topics reflect the activities and interests of the EE HPC WG, a group with over 400 members from roughly 20 different countries. Speakers at SC13 included Chris Malone (Google), Dan Reed (University of Iowa), and Jack Dongarra (University of Tennessee). There were also panel sessions covering all of the EE HPC WG team activities. Panel topics from SC13 included lessons learned from commissioning liquid-cooling building infrastructure, a methodology for improved-quality power measurements for benchmarking, and re-thinking the PUE metric. Dynamic speakers and interesting panel sessions characterized the SC13 workshop and can be expected for the SC14 workshop as well. http://eehpcwg.lbl.gov/sc14/workshops

4th Workshop on Python for High Performance and Scientific Computing (PyHPC)

Abstract: Python is an established, general-purpose, high-level programming language with a large following in research and industry for applications in fields including computational fluid dynamics, finance, biomolecular simulation, artificial intelligence, statistics, data analysis, scientific visualization, and systems management. The use of Python in scientific, high performance parallel, big data, and distributed computing roles has been on the rise with the community providing new and innovative solutions while preserving Python’s famously clean syntax, low learning curve, portability, and ease of use.
The workshop will bring together researchers and practitioners from industry, academia, and the wider community using Python in all aspects of high performance and scientific computing. The goal is to present Python applications from mathematics, science, and engineering, to discuss general topics regarding the use of Python, and to share experience using Python in scientific computing education.
For more information, see http://www.dlr.de/sc/pyhpc2014.

Workshop on Accelerator Programming using Directives (WACCPD)

Abstract: The nodes of many current HPC platforms are equipped with hardware accelerators that offer high performance with power benefits. In order to enable their use in scientific application codes without undue loss of programmer productivity, several recent efforts have been devoted to providing directive-based programming interfaces. These APIs promise application portability and a means to avoid low-level accelerator-specific programming. Many application developers prefer incremental ways to port codes to accelerators using directives without adding more complexity to their code. This workshop explores the use of these directive sets, their implementations, and experiences with their deployment in HPC applications. The workshop aims at bringing together the user and tools communities to share their knowledge and experiences of using directives to program accelerators. http://openacc.org/waccpd14
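
To give a concrete flavor of the directive-based, incremental porting style the workshop addresses, here is a minimal OpenACC sketch in C. It is our own illustration rather than material from the workshop, and the compiler invocation in the top comment is just one assumed possibility (any OpenACC-capable compiler would do).

/* Hypothetical illustration: porting a serial SAXPY loop to an accelerator
 * with a single OpenACC directive. The compiler name and flags below are
 * assumptions, e.g.:
 *   pgcc -acc -Minfo=accel saxpy.c -o saxpy
 */
#include <stdio.h>

void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    /* The only change from the serial code is this directive; the compiler
     * generates the accelerator kernel and the host/device data transfers. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    enum { N = 1 << 20 };
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]); /* expect 5.000000 */
    return 0;
}

Removing the pragma leaves a valid serial program, which is precisely the incremental-porting property that directive-based approaches aim to preserve.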

  • Submission deadline: August 22nd, 2014 (midnight, Pacific Time Zone)
  • Author notification: September 22nd, 2014

5th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA)

Abstract: Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. This is especially true for the current tier of leading petascale machines and the road to exascale computing as HPC systems continue to scale up in compute node and processor core count. These extreme-scale systems require novel scientific algorithms to hide network and memory latency, have very high computation/communication overlap, have minimal communication, and have no synchronization points. Scientific algorithms for multi-petaflop and exaflop systems also need to be fault tolerant and fault resilient, since the probability of faults increases with scale. With the advent of heterogeneous compute nodes that employ standard processors and GPGPUs, scientific algorithms need to match these architectures to extract the most performance. Key science applications require novel mathematical models and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems. http://www.csm.ornl.gov/srt/conferences/Scala/2014/index.html

  • Full paper submission: 1 September, 2014
  • Notification of acceptance: 20 September, 2014
  • Final paper submission (firm): 1 October, 2014

Workshop on Domain-Specific Languages and High-Level Frameworks for High-Performance Computing

Abstract: Multi-level heterogeneous parallelism and deep memory hierarchies in current and emerging computer systems make their programming very difficult. Domain-specific languages (DSLs) and high-level frameworks (HLFs) provide convenient abstractions, shielding application developers from much of the complexity of explicit parallel programming in standard programming languages like C/C++/Fortran. However, achieving scalability and performance portability with DSLs and HLFs is a significant challenge. For example, very few high-level frameworks can make effective use of accelerators such as GPUs and FPGAs. This workshop seeks to bring together developers and users of DSLs and HLFs to identify challenges and discuss solution approaches for their effective implementation and use on massively parallel systems. http://hpc.pnl.gov/conf/wolfhpc/2014/

  • Submission deadline : 15 Aug 2014
  • Author notification : 20 Sep 2014
  • Final papers due : 10 Oct 2014
  • WOLFHPC workshop : 17 Nov 2014

ExaMPI14 – Exascale MPI 2014

Abstract: The MPI design and its main implementations have proved surprisingly scalable. For this and many other reasons, MPI is currently the de facto standard for HPC systems and applications. However, there is a need to re-examine the Message Passing (MP) model and to explore new, innovative, and potentially disruptive concepts and algorithms, possibly investigating approaches other than those taken by the MPI 3.0 standard.
The aim of the workshop is to bring together researchers and developers to present and discuss innovative algorithms and concepts in the MP programming model and to create a forum for open and potentially controversial discussions on the future of MPI in the exascale era.
Possible workshop topics include innovative algorithms for collective operations, extensions to MPI, including data centric models such as active messages, scheduling/routing to avoid network congestion, “fault-tolerant” communication, interoperability of MP and PGAS models, and integration of task-parallel models in MPI. https://www.pdc.kth.se/exampi14
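
As a deliberately simple illustration of one topic on that list, here is a minimal sketch (our own, not taken from the workshop) that uses the MPI-3 nonblocking collective MPI_Iallreduce to overlap a reduction with independent local work; the build and run commands in the comment are assumptions.

/* Hypothetical sketch: overlapping an MPI-3 nonblocking collective
 * (MPI_Iallreduce) with independent local computation.
 * Assumed build/run: mpicc overlap.c -o overlap && mpirun -np 4 ./overlap
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking the caller. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Do work that does not depend on the reduction result while the
     * collective progresses in the background. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; ++i)
        busy += 1e-6;

    /* Complete the collective before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("global sum = %g (local work: %g)\n", global, busy);

    MPI_Finalize();
    return 0;
}

Whether such overlap pays off in practice depends on the MPI implementation's asynchronous progress, which is exactly the kind of question the workshop invites.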
  •  Paper submission: September 5, 2014
  •  Acceptance notification: September 22, 2014
  •  Final papers due: October 6, 2014

 

Hardware-Software Co-Design for High Performance Computing (Co-HPC)

Abstract: Hardware-software co-design involves the concurrent design of hardware and software components of complex computer systems, whereby application requirements influence architecture design and hardware constraints influence design of algorithms and software. Concurrent design of hardware and software has been used for the past two decades for embedded systems in automobiles, avionics, mobile devices, and other such products, to optimize for design constraints such as performance, power, and cost. HPC is facing a similar challenge as we move towards the exascale era, with the necessity of designing systems that run large-scale simulations with high performance while meeting cost and energy consumption constraints. This workshop will invite participation from researchers who are investigating the interrelationships between algorithms/applications, systems software, and hardware, and who are developing methodologies and tools for hardware-software co-design for HPC.

The LLVM Compiler Infrastructure in HPC

Abstract: LLVM, winner of the 2012 ACM Software System Award, has become an integral part of the software-development ecosystem for optimizing compilers, dynamic-language execution engines, source-code analysis and transformation tools, debuggers and linkers, and a whole host of programming-language and toolchain-related components. Now heavily used in both academia and industry, where it allows for rapid development of production-quality tools, LLVM is increasingly used in work targeted at high-performance computing. Research in and implementation of programming-language analysis, compilation, execution and profiling has clearly benefited from the availability of a high-quality, freely-available infrastructure on which to build. This workshop will focus on recent developments, from both academia and industry, that build on LLVM to advance the state of the art in high-performance computing.
http://llvm-hpc-workshop.github.io/

  • Paper submissions due: September 1, 2014
  • Notification to authors of acceptance: October 1, 2014
  • Camera-ready papers due: November 1, 2014
  • Workshop takes place: November 17, 2014

High Performance Technical Computing in Dynamic Languages

Abstract: Dynamic high-level languages such as Julia, Maple®, Mathematica®, MATLAB®, Octave, Python, R, and Scilab are rapidly gaining popularity with computational scientists and engineers, who often find these languages more productive for rapid prototyping of numerical simulation codes. However, writing legible yet performant code in dynamic languages remains challenging, which limits the scalability of code written in such languages, particularly when deployed on massively parallel architectures such as clusters, cloud servers, and supercomputers. This workshop aims to bring together users, developers, and practitioners of dynamic technical computing languages, regardless of language, affiliation or discipline, to discuss topics of common interest. Examples of such topics include performance, software development, abstractions, composability and reusability, best practices for software engineering, and applications in the context of visualization, information retrieval and big data analytics. http://jiahao.github.io/hptcdl-sc14/

  • August 18, 2014: Paper submission deadline. Submit via EasyChair
  • September 15, 2014: Notification to authors of acceptance
  • October 15, 2014: E-copyright forms and camera-ready papers due
  • November 17, 2014: Workshop

VISTech Workshop 2014: Visualization Infrastructure & Systems Technology

Abstract: Human perception is centered on the ability to process information contained in visible light, and our visual interface is a tremendously powerful data processor. Every day we are inundated with staggering amounts of digital data. For many types of computational research, the field of visualization is the only viable means of extracting information and developing understanding from this data. Integrating our visual capacity with technological capabilities has tremendous potential for transformational science. We seek to explore the intersection between human perception and large-scale visual analysis through the study of visualization interfaces and interactive displays. This rich intersection includes: virtual reality systems, visualization through augmented reality, large scale visualization systems, novel visualization interfaces, high-resolution interfaces, mobile displays, and visualization display middleware. The VISTech workshop will provide a space for experts in the large-scale visualization technology field and users to come together to discuss state-of-the art technologies for visualization and visualization laboratories.

ATIP Workshop on Japanese Research Toward Next-Generation Extreme Computing

Abstract: The Asian Technology Information Program (ATIP) proposes to hold a workshop at SC14 (New Orleans, LA) titled “Japanese Research Toward Next-Generation Extreme Computing.” This workshop will include a significant set of presentations, posters, and panel discussions by Japanese researchers from universities, government laboratories, and industry. Participants will address topics including national exascale plans as well as the most significant hardware and software research. A key aspect of the proposed workshop will be the unique opportunity for members of the US research community to interact and have direct discussions with the top Japanese scientists who are participating. SC is the ideal venue for this workshop because, after the US, more SC participants come from Japan than from any other country. There are a multitude of exhibitor booths, research papers, panels, and other content from Japan, and Japanese researchers frequently win awards for best performance, greenest system, fastest networks, etc.

Extreme-Scale Programming Tools

Abstract: Approaching exascale, architectural complexity and severe resource limitations with respect to power, memory, and I/O make tools support for debugging and performance optimization more critical than ever before. However, the challenges mentioned above also apply to tools development and, in particular, raise the importance of topics such as automatic tuning and methodologies for exascale tools-aided application development. This workshop will serve as a forum for application, system, and tool developers to discuss the requirements for future exascale-enabled tools and the roadblocks that need to be addressed on the way. We also highly encourage application developers to share their experiences with using tools.
The workshop is the third in a series at SC conferences organized by the Virtual Institute – High Productivity Supercomputing (VI-HPS), an international initiative of HPC programming-tool builders. The event will also focus on the community-building process necessary to create an integrated tools suite ready for an exascale software stack. http://www.vi-hps.org/symposia/other/espt-sc14.html
  • Monday 4 Aug. 2014: abstract submissions due (extended deadline)
  • Monday 1 Sept. 2014: notification of acceptance (at latest)
  • Monday 17 Nov 2014: ESPT workshop at SC14

NDM’14: Fourth International Workshop on Network-aware Data Management

Abstract: Data sharing and resource coordination among distributed teams are becoming more significant challenges with every passing year. Networking is one of the most crucial components in the overall system architecture of a data-centric environment. Many of the current solutions in both industry and scientific domains depend on the underlying network infrastructure and its performance. There is a need for efficient use of the networking middleware to address increasing data and compute requirements. The main scope of this workshop is to promote new collaborations between the data management and networking communities to evaluate emerging trends and current technological developments, and to discuss future design principles of network-aware data management. We seek contributions from academia, government, and industry addressing current research and development efforts in remote data access mechanisms, end-to-end resource coordination, network virtualization, analysis and management frameworks, practical experiences, data-center networking, and performance problems in high-bandwidth networks.

Visual Performance Analytics – VPA

Abstract: Over the last decades, an incredible amount of resources has been devoted to building ever more powerful supercomputers. However, exploiting the full capabilities of these machines is becoming exponentially more difficult with each generation of hardware. To help understand and optimize the behavior of massively parallel simulations, the performance analysis community has created a wide range of tools to collect performance data, such as flop counts or network traffic, at the largest scale. However, this success has created new challenges, as the resulting data is too large and too complex to be analyzed easily. Therefore, new automatic analysis and visualization approaches must be developed to allow application developers to intuitively understand the multiple, interdependent effects that their algorithmic choices have on the final performance. This workshop intends to bring together researchers from performance analysis and visualization to discuss new approaches for combining both areas to analyze and optimize large-scale applications. http://cedmav.org/vpa2014

  • August 8th: extended submission deadline for full and short papers
  • September 15th: notification of acceptance
  • October 6th: final paper and copyrights due

The Second International Workshop on Software Engineering for High Performance Computing in Computational Science & Engineering (SE-HPCCSE 2014)

Abstract: Researchers are increasingly using high performance computing (HPC), including GPGPUs and computing clusters, for computational science & engineering (CSE) applications. Unfortunately, when developing HPC software, developers must solve reliability, availability, and maintainability problems at extreme scales, understand domain-specific constraints, deal with uncertainties inherent in scientific exploration, and develop algorithms that use computing resources efficiently. Software engineering (SE) researchers have developed tools and practices to support development tasks, including validation & verification, design, requirements management, and maintenance. HPC CSE software requires appropriately tailored SE tools and methods. The SE-HPCCSE workshop addresses this need by bringing together members of the SE and HPC CSE communities to share perspectives, present findings from research and practice, and generate an agenda to improve tools and practices for developing HPC CSE software. In the 2013 edition of this workshop, the discussion focused on a number of interesting topics, including bit-by-bit versus scientific validation and reproducibility.
http://sehpccse14.cs.ua.edu/

  • Submission Deadline: August 23, 2014
  • Author Notification: September 15, 2014
  • Workshop Date: November 21, 2014
  • Final Manuscript Due for proceedings: TBD

9th Gateway Computing Environments Workshop (GCE14)

Abstract: Science today is increasingly digital and collaborative. The impact of high-end computing has exploded as new communities accelerate their research through science gateways such as CIPRES and iPlant. Currently 40% of the NSF XSEDE program’s users come through science gateways. As datasets increase in size, communities increasingly use gateways for remote analysis. Software has broader, more scalable impact when researchers set up web interfaces to up-to-date codes running on high-end resources. Gateways increasingly connect varied elements of cyberinfrastructure – instruments, streaming sensor data, data stores, and computing resources of all types. Online collaborative tools allow the sharing of both source data and subsequent analyses, speeding discovery. The important work of gateway development, however, is often done in an isolated, hobbyist environment. Leveraging knowledge about common tasks frees developers to focus on higher-level, grand-challenge functionality in their discipline. This workshop will feature case studies and an opportunity to share common experiences. http://sciencegateways.org/upcoming-events/gce14

  • Submissions due Wednesday, August 27
  • Acceptance notification Monday, September 22
  • Final acceptance of submissions (all reviewer comments resolved) Wednesday, October 8
  • Camera-ready papers due (deadline set by SC14) Wednesday, October 15
  • Workshop held, posters due (presenters should bring them to the workshop) Friday, November 21

Innovating the Network for Data Intensive Science

Abstract: Every year SCInet develops and implements the network for the SC conference. This state-of-the-art network connects many demonstrations of big-data processing infrastructure at the highest line speeds and with the newest technologies available, and showcases the newest functionality. The show-floor network connects to many laboratories worldwide using lambda connections and NREN networks. This workshop brings together network researchers and innovators to raise challenges and novel ideas that stretch SCInet even further. We invite papers that propose and discuss new and novel techniques regarding the capacity and functionality of networks, their control, and their architecture, to be demonstrated at SC14. https://scinet.supercomputing.org/workshop/

Workshop on Best Practices for HPC Training

Abstract: HPC facilities face the challenge of serving a diverse user base with different skill levels and needs. Some users run precompiled applications, while others develop complex, highly optimized codes. Therefore, HPC training must include a variety of topics at different levels to cater to a range of skillsets. As centers worldwide install increasingly heterogeneous architectures, training will be even more important and in greater demand. A good training program can have many benefits: less time spent on rudimentary assistance, efficient utilization of resources, increased staff-user interaction, and training of the next generation of users. Unfortunately, the most successful training strategies at HPC facilities are not documented. This workshop aims to expose best practices for delivering HPC training. Topics will include: methods of delivery, development of curricula, optimizing duration, surveys and evaluations, metrics, and determining success. Lastly, the workshop aims to develop collaborative connections between participating HPC centers. http://hpctraining.github.io/SC14workshop/

  • Deadline for abstract submission: August 31st, 2014

1st International Workshop on HPC User Support Tools (HUST-14)

Abstract: Researchers pushing the boundaries of science and technology are an existential reason for supercomputing centres. In order to be productive, they heavily depend on HPC support teams, who in turn often struggle to adequately support the researchers.
Nevertheless, recent surveys have pointed out that there is a widespread lack of collaboration between HPC support teams around the world, even though they frequently face very similar problems in providing end users with the tools and services they require.
With this workshop we aim to bring together all parties involved, i.e. system administrators, user support team members, tool developers, policy makers and end users, to discuss these issues and bring forward the solutions they have come up with. As such, we want to provide a platform to present tools, share best practices and exchange ideas that help streamline HPC user support. https://ugent.be/hpc/hust14.html
  • Call for papers: Wednesday May 14th 2014
  • Submission open: Sunday June 1st 2014
  • Workshop papers due: Sunday August 10th 2014 (23:59 AOE)
  • Notification of acceptance: Wednesday October 1st 2014
  • Camera-ready papers due: Monday October 13th 2014
  • Workshop date: Friday November 21st 2014

Advancing On-line HPC Learning

Abstract: The goal of this workshop is to identify the challenges and opportunities in developing and delivering the infrastructure, content, teaching methods, and certification and recognition mechanisms for providing high-quality on-line programs. Another key goal is to produce a publicly available report that documents the lessons learned and recommendations for advancing the development and delivery of high-quality on-line programs.

The 5th International Workshop on Data-Intensive Computing in the Clouds

Abstract: Applications and experiments in all areas of science are becoming increasingly complex. Some applications generate data volumes reaching hundreds of terabytes and even petabytes. As scientific applications become more data intensive, the management of data resources and dataflow between the storage and compute resources is becoming the main bottleneck. Analyzing, visualizing, and disseminating these large data sets has become a major challenge, and data intensive computing is now considered the “fourth paradigm” in scientific discovery after theoretical, experimental, and computational science.
The 5th international workshop on Data-intensive Computing in the Clouds (DataCloud 2014) will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running data-intensive computing workloads on cloud computing infrastructures. The workshop will focus on the use of cloud-based technologies to meet the new data-intensive scientific challenges that are not well served by current supercomputers, grids, or compute-intensive clouds. http://datasys.cs.iit.edu/events/DataCloud2014/
  • Paper submission: September 1st, 2014
  • Acceptance notification: October 1st, 2014
  • Final papers due: October 10th, 2014

Women in HPC

Abstract: Gender inequality is a problem across all scientific disciplines. Women are more likely to successfully complete tertiary education than men but less likely to become scientists. The multi-disciplinary background of HPC scientists should facilitate broad female engagement but fewer than 10% of participants were female at two recent HPC conferences. Strong gender stereotyping of science negatively impacts female uptake and achievement in scientific education and employment. Removing gender stereotyping engages more women and gender-balanced groups have greater collective intelligence, which should benefit the HPC community.
This workshop will begin to address gender inequality in HPC by encouraging on-going participation of female researchers and providing an opportunity for female early career researchers to showcase their work and network with role-models and peers in an environment that reduces the male gender stereotype. Invited talks from leading female researchers will discuss their careers and a panel session will create a gender-equality action plan.

 
