Parallel, Distributed, and Network-Based Processing has undergone impressive change over recent years. New architectures and applications have rapidly become the central focus of the discipline, often as a result of the cross-fertilization of parallel and distributed technologies with other rapidly evolving technologies. Reviewing and assessing these new developments against recent research achievements in the well-established areas of parallel and distributed computing, from both industry and the scientific community, is therefore paramount. PDP 2023 will provide a forum for presenting these and other issues through original research presentations and will facilitate the exchange of knowledge and new ideas at the highest technical level. This year's edition is part of the dissemination activities of the ADMIRE project, which will also be an exhibitor in the conference Demo Area together with E4 Computer Engineering.
The ADMIRE project aims to avoid congestion and to balance computational and storage performance when processing extremely large data sets. Its main objective is to establish this control by creating an active I/O stack that dynamically adjusts computation and storage requirements through intelligent global coordination, the malleability of computation and I/O, and the scheduling of storage resources along all levels of the storage hierarchy.
Submit your papers to EasyChair.
Parallel Computing: massively parallel machines; embedded parallel and distributed systems; multi- and many-core systems; GPU and FPGA-based parallel systems; parallel I/O; memory organization.
Distributed and Network-based Computing: Cluster, Grid, Web, and Cloud computing; mobile computing; interconnection networks.
Big Data: large-scale data processing; distributed databases and archives; large-scale data management; metadata; data-intensive applications.
Programming models and Tools: programming languages and environments; runtime support systems; performance prediction and analysis; simulation of parallel and distributed systems.
Systems and Architectures: novel system architectures; high data throughput architectures; service-oriented architectures; heterogeneous systems; shared-memory and message-passing systems; middleware and distributed operating systems; dependability and survivability; resource management.
Advanced Algorithms and Applications: distributed algorithms; multi-disciplinary applications; computations over irregular domains; numerical applications with multi-level parallelism; real-time distributed applications.
The Call for Papers is available here.
Full paper: maximum of 8 pages; additional pages are charged at 60 EUR per extra page (up to 2 additional pages).
Short paper: maximum of 4 pages; one additional page can be included for a charge of 60 EUR.
The registration procedure is managed by the Euromicro web portal.
The full PDP 2023 schedule of keynotes, sessions, and workshops will be available soon.
Euromicro
University Carlos III of Madrid
University of Naples “Federico II”
University of Naples “Federico II”
Universidad Complutense de Madrid
University of Naples “Federico II”
IMATI-CNR
University of Calabria
University of Naples “Federico II”
University of Sassari
University of Naples “Federico II”
University of Naples “Parthenope”
University of Messina
Universidad Carlos III de Madrid
University of Naples “Parthenope”
Well-known industry leaders and emerging talents
The following Special Sessions will be part of PDP 2023
The intent of HPCMS is to offer an opportunity to express and confront views on trends, challenges, and the state of the art in diverse application fields such as engineering, physics, chemistry, biology, geology, medicine, ecology, sociology, traffic control, and economics.
Topics of interest include, but are not limited to:
As in previous editions, the organizers of the HPCMS session are planning a Special Issue of a leading international ISI journal, based on distinguished papers accepted for the session. For instance, selected papers from past workshop editions have been published in the ISI journals “Journal of Parallel and Distributed Computing”, “International Journal of High Performance Computing Applications”, and “Concurrency and Computation: Practice and Experience”.
Chairs:
Program Committee
Heterogeneity is emerging as one of the main characteristics of today’s and future HPC environments, in which different node organizations, memory hierarchies, and kinds of exotic accelerators are increasingly present. It pervades the entire spectrum of the computing continuum, ranging from large Cloud infrastructures and data centers down to Internet of Things and Edge Computing environments, and aims to make available, in a transparent and friendly way, the multitude of low-power and heterogeneous HPC resources all around us. In this context, for Computational Science and Machine Learning, it is essential to leverage efficient and highly scalable libraries and tools capable of exploiting such modern heterogeneous computers. These systems are typically characterized by very different software environments, which require a new level of flexibility in the algorithms and methods used to achieve an adequate level of performance, with growing attention to energy consumption. This Special Session aims to provide a forum for researchers and practitioners to discuss recent advances in parallel methods and algorithms and their implementations on current and future heterogeneous HPC architectures. We solicit research works that address algorithmic design, implementation techniques, performance analysis, integration of parallel numerical methods in science and engineering applications, energy-aware techniques, and theoretical models that efficiently solve problems on heterogeneous platforms.
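To make the kind of portability the session targets concrete, here is a minimal sketch assuming a CuPy/NumPy pairing (an illustrative assumption, not a session requirement): the same stencil kernel runs on a GPU when one is available and falls back to the CPU otherwise, since CuPy mirrors a large subset of the NumPy array API.

```python
# Minimal sketch of a heterogeneity-aware numerical kernel: prefer a GPU
# backend (CuPy) when a device is present, otherwise fall back to CPU
# (NumPy). One code path serves both backends.
import numpy as np

try:
    import cupy as cp
    cp.cuda.runtime.getDeviceCount()  # raises if no usable CUDA device
    xp = cp
except Exception:
    xp = np  # CPU fallback

def jacobi_step(u):
    """One Jacobi relaxation sweep on a 2-D grid, backend-agnostic."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = xp.zeros((512, 512), dtype=xp.float64)
u[0, :] = 1.0  # fixed boundary condition on one edge
for _ in range(100):
    u = jacobi_step(u)
print("backend:", xp.__name__)
```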
We focus on papers covering various topics of interest that include, but are not limited to, the following:
Cloud Computing covers a broad range of distributed computing principles, from infrastructure (e.g., distributed storage, reconfigurable networks) to new programming platforms (e.g., MS Azure, Google App Engine) and Internet-based applications. In particular, Infrastructure as a Service (IaaS) Cloud systems allow the dynamic creation, destruction, and management of virtual machines (VMs) as part of virtual computing infrastructures. IaaS Clouds provide a high level of abstraction to the end user, one that allows the creation of on-demand services through a pay-as-you-go infrastructure combined with elasticity. The increasingly large range of choices and the availability of IaaS toolkits have also enabled the creation of cloud solutions and frameworks suitable even for private deployment and practical IaaS use on smaller scales.
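As an illustration of this on-demand VM lifecycle, here is a minimal sketch using the Apache Libcloud abstraction layer; the OpenStack endpoint, credentials, image filter, and node name below are placeholder assumptions rather than details of any particular deployment.

```python
# Minimal sketch of the IaaS create/use/destroy VM lifecycle using
# Apache Libcloud. Endpoint, credentials, image and size choices are
# placeholders for a real private or public IaaS deployment.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.OPENSTACK)  # any supported IaaS provider works
conn = Driver('demo-user', 'demo-password',  # hypothetical credentials
              ex_force_auth_url='https://cloud.example.org:5000',
              ex_force_auth_version='3.x_password')

# Pick an image (OS template) and a size (flavour) for the new VM.
image = [i for i in conn.list_images() if 'Ubuntu' in i.name][0]
size = conn.list_sizes()[0]

node = conn.create_node(name='pdp-demo-vm', image=image, size=size)
conn.wait_until_running([node])  # block until the VM is reachable

# ... elastic use of the VM happens here, paid per unit of time ...

node.destroy()  # release resources: pay-as-you-go
```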
This special session on Cloud Computing is intended to be a forum for the exchange of ideas and experiences on the use of Cloud Computing technologies and applications with compute- and data-intensive workloads. The special session also aims at presenting the challenges and opportunities offered by the development of open-source Cloud Computing solutions, as well as case studies in applications of Cloud Computing.
Authors are invited to submit original and unpublished research in the areas of Cloud, Fog/Edge, Serverless, and Distributed Computing. With the rapid evolution of newly emerging technologies, this session also aims to provide a forum for novel methods and case studies on the integrated use of clouds, fogs, the Internet of Things (IoT), and Blockchain systems. The session will be a good occasion to share, learn, and discuss the latest results in these research fields. The special session program will include presentations of peer-reviewed papers.
Topics of interest include, but are not limited to:
Recently, we have been witnessing the number of Internet-connected devices growing at an incredible pace: devices that need to be “always-on” to access data and services through the network. This massive set of devices puts a lot of pressure on the computing infrastructure that is called on to serve their requests. This is particularly critical for the so-called next-generation (NextGen) applications, i.e., those characterized by stringent requirements in terms of latency, data, privacy, and network bandwidth. Such “pressure” stimulates the evolution of classical Cloud computing platforms towards a large-scale distributed computing infrastructure of heterogeneous devices, forming a continuum from the Cloud to the Edge of the network.
This complex environment is driving a paradigm shift in the organization of computing infrastructures, moving from “mostly-centralized” to “mostly-decentralized” deployments. Rather than relying on a traditional data-center compute model, the notion of a compute continuum is gaining momentum, exploiting the right computational resources at optimal processing points in the system.
In the traditional cloud model, enterprise data is directed straight to the cloud for processing, where most of the heavy compute intelligence is located. But in the transformative data-driven era we live in, this is increasingly not a viable long-term economic model, due to the volume of data and a new emphasis on security, safety, privacy, latency, and reliability.
Today, data insights drive near-real-time decisions directly affecting the operation of factories, cities, transportation, buildings, and homes. To cope, computing must be fast, efficient, and secure, which generally means putting more compute firepower closer to the data source. This builds the case for more on-device endpoint computing, more localized computing with a new breed of network and private edge servers, and sensible choices over which workloads need to remain in cloud data centers.
This special session stems from the focus group on the compute continuum that is part of the Italian National Laboratory on “High-Performance Computing: Key Technologies and Tools”. Starting from that work, the special session aims to bring together experts from academia and industry to identify new challenges for the management of resources in cloud-edge infrastructures and to promote this vision to academia and industry stakeholders.
Topics of interest include, but are not limited to:
Abstract
The global information technology ecosystem is currently in transition to a new generation of applications, which require intensive acquisition, processing, and storage of data, both at the sensor and at the computer level. New, more complex scientific applications and the increasing availability of data generated by high-resolution scientific instruments in domains as diverse as climate, energy, and biomedicine require synergy between high-performance computing (HPC) and large-scale data analysis (Big Data). Today, the HPC world demands techniques from the Big Data world, while intensive data analysis requires HPC solutions. However, the tools and cultures of HPC and Big Data have diverged, because HPC has traditionally focused on strongly coupled, compute-intensive problems, while Big Data has been geared towards data analysis in highly scalable applications.
The overall goal of this workshop is to create a scientific discussion forum for exchanging techniques and experiences that improve the integration of the HPC and Big Data paradigms, providing convenient ways to create compute- and data-intensive software and to adapt existing hardware and software on HPC platforms. The workshop thus aims at bringing together developers of IoT/Edge/Fog/HPC applications with researchers in the field of distributed IT systems. It addresses researchers who are already employing distributed infrastructure techniques in IoT applications, as well as computer scientists working in the field of distributed systems who are interested in bringing new developments into the Big Data convergence area.
The workshop will provide the opportunity to assess technology roadmaps to support IoT data collection, Data Analytics, and HPC at scale, and to share users’ experiences.
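As a flavor of the HPC/Big Data coupling the workshop addresses, here is a minimal sketch assuming mpi4py and NumPy, with synthetic data standing in for a real distributed data set: each MPI rank analyzes its own shard, and a global reduction merges the partial statistics.

```python
# Minimal sketch of HPC/Big Data convergence: each MPI rank analyses its
# partition of a large data set with NumPy, then a reduction merges the
# partial statistics. Run with e.g.: mpiexec -n 4 python analyse.py
# (the file name and the synthetic data are illustrative assumptions).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

# Stand-in for reading this rank's shard of a distributed data set
# (in practice: parallel I/O, object storage, or an in-situ stream).
rng = np.random.default_rng(seed=rank)
shard = rng.normal(loc=rank, scale=1.0, size=1_000_000)

# Local analytics on the shard: partial sums for mean/variance.
local = np.array([shard.sum(), (shard ** 2).sum(), float(shard.size)])

# Global reduction: combine partial sums across all ranks.
total = np.zeros_like(local)
comm.Allreduce(local, total, op=MPI.SUM)
s, s2, n = total
mean = s / n
var = s2 / n - mean ** 2

if rank == 0:
    print(f"{int(n)} samples across {nprocs} ranks: "
          f"mean={mean:.4f} var={var:.4f}")
```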
The interest in this topic is evidenced by the existence in Europe of a working group on the convergence between HPC and Big Data, supported by ETP4HPC and BDVA, led by Prof. María S. Pérez, and with the cooperation of several research groups in this proposal. In addition, Prof. Jesús Carretero collaborates in the preparation of the strategic research agenda of the European platform ETP4HPC in the line of data-intensive applications, and Dr. Rafael Mayo-García coordinates the European Energy Research Alliance (EERA) transversal Joint Programme ‘Digitalisation for energy’, where convergence research on HPC and Data Science is developed.
Target audience – why and to whom the workshop is of interest
The workshop addresses an audience with two profiles.
On the one hand, it attracts researchers who are already employing distributed infrastructure techniques to implement IoT/Edge/Fog/Cloud/HPC solutions, in particular scientists who are developing data- and compute-intensive Big Data applications that involve IoT data, large-scale IoT networks and deployments, or complex analysis and machine-learning pipelines to exploit the data. On the other hand, it attracts computer scientists working in the field of distributed systems who are interested in bringing new developments into the convergence of Big Data and HPC solutions.
Topics of interest
Contributions are expected on, but not restricted to, the following topics:
Organization
Workshop Chairs
Katzalin Olcoz (Universidad Complutense de Madrid), katzalin@ucm.es
Katzalin Olcoz has been an Associate Professor in the Department of Computer Architecture and System Engineering of the Complutense University of Madrid (Spain) since 2000. Within the computer architecture group of the Complutense University, she has been involved since 1992 in several projects in the fields of computer architecture and design automation from high-level specifications. Her current research interests focus on high-performance computing, heterogeneous computing, energy efficiency, and virtualization. She is Associate Editor of IEEE Transactions on CAD and IEEE Transactions on Emerging Topics in Computing. She has served on the Program Committees of several international conferences, such as ICS, PDP, ICCAD, VLSID, and ISLPED.
Jesus Carretero (University Carlos III of Madrid), jesus.carretero@uc3m.es
Jesus Carretero has been a Full Professor of Computer Architecture and Technology at Universidad Carlos III de Madrid (Spain) since 2002. His research activity is centered on high-performance computing systems, large-scale distributed systems, and real-time systems applied to data management, with applications to biomedicine, image processing, and COVID-19 pandemic simulation. He is currently coordinating the EuroHPC project ADMIRE. He was Action Chair of the IC1305 COST Action “Network for Sustainable Ultrascale Computing Systems (NESUS)”. He organized CCGRID 2017 in Madrid and has been General Chair of HPCC 2011, MUE 2012, and ISPA 2016. He is currently Applications Track Vice-chair of the Supercomputing conference. Prof. Carretero is a senior member of the IEEE Computer Society.
Program Committee
Workshop format
We aim at a half-day workshop.
We plan a combination of oral presentations, short talks about related topics from the main PDP conference, and a closing panel discussion.
We plan to host about six oral presentations, with 20 minutes per talk and 10 minutes for questions and discussion. Talk selection will be based on the interest of the talk and its relation to the workshop.
In addition to the selected talks, the workshop will also feature a keynote and invited short talks from the main PDP track with the goal of extending the scope of our workshop.
A final panel discussion will summarize the workshop and propose joint next steps to advance Big Data–HPC convergence research on supercomputers and large-scale distributed IT systems.
Estimated attendance is between 10 and 20 participants.
Publicity Plan
The website will be hosted by the CABAHLA-CM project website (cabahla.org).
The workshop will be advertised through an open call within the networks of the workshop chairs and program committee, including local and international community networks. Advertising will include announcements on community, institutional, and personal websites and on email lists.
Background
This workshop proposal is a result of the work carried out in the CABAHLA-CM project (cabahla.org), a successful project that brings together four research groups with vast experience in HPC and data-intensive systems and a strong national and international presence.
This project has been funded by the Comunidad de Madrid (Madrid regional government) under grant S2018/TCS4423.
The current static usage model of HPC systems is becoming increasingly inefficient. This is driven by the continuously growing complexity and heterogeneity of system architectures, in combination with the increased usage of coupled applications, the need for strong scaling with extreme-scale parallelism, and the increasing reliance on complex and dynamic workflows. As a consequence, we see a rise in research on malleable systems, middleware, and applications, which can adjust resource usage dynamically in order to extract maximum efficiency.
Malleability allows systems to dynamically adjust the computation and storage needs of applications, on the one side, and of the global system, on the other. Such malleable systems, however, face a series of fundamental research challenges, including: Who initiates changes in resource availability or usage? How are such changes communicated? How is the optimal usage computed? How can applications cope with dynamically changing resources? What should malleable programming models and abstractions look like? How should resource management frameworks for malleable systems be designed? What should the API for applications be?
This tutorial will provide an in-depth presentation of emerging software designs to achieve malleability in high-performance computing, high-level parallel programming models, and programmability techniques to improve applications’ malleability. The main part of the tutorial will be devoted to showing and demonstrating FlexMPI, a framework for HPC malleability, and Limitless, an HPC monitoring system that gathers information from applications and systems, together with the use of AI and ML techniques to steer malleability in systems and applications. Finally, we will show how to apply the presented solutions to two use cases: Wacom++ and Nek5000.
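The tutorial centers on FlexMPI, whose API is not reproduced here. As a generic illustration of one building block that malleable runtimes rely on, here is a minimal sketch of MPI dynamic process management (MPI_Comm_spawn) via mpi4py; the worker script name and the size of the resource grant are assumptions for illustration only.

```python
# Minimal sketch of one malleability building block: MPI dynamic process
# management (MPI_Comm_spawn), as exposed by mpi4py. This is NOT the
# FlexMPI API; it only illustrates the "expand" step a malleable runtime
# can perform when a resource manager grants extra processes.
import sys
from mpi4py import MPI

GRANTED = 2  # assume the resource manager just granted 2 more processes

# Spawn additional workers at runtime; returns an intercommunicator.
# 'worker.py' is a hypothetical script that calls MPI.Comm.Get_parent()
# and performs the matching Merge on its side.
inter = MPI.COMM_SELF.Spawn(sys.executable,
                            args=['worker.py'],
                            maxprocs=GRANTED)

# Merge into a single intracommunicator so old and new processes can
# cooperate on the redistributed workload.
workers = inter.Merge(high=False)
print('expanded communicator size:', workers.Get_size())

workers.bcast({'cmd': 'rebalance'}, root=0)  # hand out new partitioning
workers.Free()
inter.Disconnect()
```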
Outline
1. System and system architecture considerations in designing malleable architectures.
2. Emerging software designs to achieve malleability in high-performance computing.
3. High-level parallel programming models and programmability techniques to improve applications’ malleability.
4. FlexMPI framework for HPC malleability.
5. Limitless: Getting information from applications and systems.
6. Use of AI and ML techniques to steer malleability in systems and applications.
7. Experiences and use cases applying malleability to HPC applications: Wacom++ and Nek5000.
Materials
Length
3 hours
Target audience
Jesus Carretero is a Full Professor of Computer Architecture and Technology at Universidad Carlos III de Madrid (Spain), where he has been responsible for that knowledge area since 2000 and leads the Computer Architecture Research Group (ARCOS). He received a PhD in Informatics from Universidad Politécnica de Madrid in 1995. He has also served as Coordinator of the Informatics area of the Agencia Española de Investigación since 2020. His research activity is centered on high-performance computing systems, large-scale distributed systems, data-intensive computing, IoT, and real-time systems. He is currently coordinating the EuroHPC project ADMIRE, “Adaptive multi-tier intelligent data manager for Exascale”, aiming towards ad-hoc malleable storage systems. He was also Action Chair of the IC1305 COST Action “Network for Sustainable Ultrascale Computing Systems (NESUS)”. He has also participated in the H2020 ASPIDE project and in the FP7 project REPARA. He has participated in and led several national and international research projects in these areas, funded by the Madrid regional government, the Spanish Ministry of Education, and the European Union. He is an associate editor of the TPDS, ACM CS, and FGCS journals. He has been General Chair of CCGRID 2017, ICA3PP 2016, and HPCC 2011, Program Chair of ISPA 2012, EuroMPI 2013, C4Bio 2014, and ESAA 2014, and Applications Track Vice-chair of SC22.
Many people worldwide are working together on PDP 2023.
General Chairs:
Financial Chair:
Industrial Chairs:
Program Co-chairs:
Proceedings Co-chairs:
Publicity Chairs:
Local arrangements Co-chairs:
Location
The conference is hosted at Villa Doria d'Angri, a monumental manor belonging to the Università degli Studi di Napoli "Parthenope".
Villa Doria d’Angri
Via Francesco Petrarca 80, Naples, 80123, Italy
The most convenient way to reach our venue is by taxi or car. Bus line C21 connects Mergellina railway station to Villa Doria d’Angri.
Parking for conference attendees is available at Villa Doria d’Angri.
If you want to stay near the conference venue, here is a list of suggested accommodations. Please check room availability on the hotels’ websites:
A huge thanks to all our amazing partners. We couldn’t have a conference without you!