Prof. Fatos Xhafa
Title of the Talk: Distributed Intelligent Edge via Federated Learning

Abstract: The latest advances in Cloud-to-thing continuum computing support the Distributed Intelligent Edge, which aims to use end devices, at the edges of the Internet, to offload task processing and build intelligence close to users. In this talk, we will discuss offloading models in the Cloud-to-thing continuum, their challenges and opportunities for the Distributed Intelligent Edge, and the real-time processing and analysis of IoT data streams to build intelligence at the Edge. A real-life scenario of classifying users' emotional states from multimodal sensor data streams via Federated Learning will be discussed.
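To make the Federated Learning scenario above concrete, here is a minimal FedAvg-style sketch in Python, assuming a toy logistic-regression classifier and synthetic per-device data; every name and number in it is hypothetical rather than part of the speaker's actual system. The key property it illustrates is that only model weights travel to the server, while the raw multimodal data stays on each device.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's local training: plain logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probability per sample
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on local data only
    return w

def federated_round(w_global, clients):
    # One FedAvg round: broadcast the global model, train locally on each
    # device, then average the returned weights, weighted by data size.
    sizes = [len(y) for _, y in clients]
    updates = [local_update(w_global.copy(), X, y) for X, y in clients]
    return sum(n * wi for n, wi in zip(sizes, updates)) / sum(sizes)

# Toy run: three "devices", each holding private (features, emotion-label) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
print("global model weights:", w)
```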
Prof. Tamas Kiss
Title of the Talk: The Evolution of Application Orchestration in the Cloud-to-Edge Continuum

Abstract: The emergence of large amounts of data collected at the edges of the network by various Internet of Things (IoT) devices requires new and innovative solutions for data processing. Sending all this data to central cloud servers increases latency and network traffic. Therefore, additional processing layers, edge and fog computing nodes, have been introduced to provide compute capacity closer to the data sources. However, the introduction of such additional layers also increases complexity. For applications incorporating data from many IoT devices and utilising edge, fog and cloud computing nodes, the automated and optimised deployment and run-time management of the application's microservices becomes crucial. Such tasks are typically handled by so-called cloud-to-edge orchestrators. A Cloud-to-Edge orchestration system is responsible for automating application deployment and runtime management by providing simultaneous access to the heterogeneous resource landscape of the computing continuum.

Most currently available orchestration tools are based on a centralised execution model. Such a model, while relatively easy to implement, carries several disadvantages: it is a single point of failure, an easy target for security attacks, and the central controller can be easily overloaded as the system scales. Furthermore, the centralised execution model is not a natural fit for the distributed and dynamically changing nature of the cloud-to-edge computing continuum. Therefore, recent research efforts have shifted towards a decentralised, autonomous, secure and self-organised application orchestration model that combines and extends emerging technologies such as Swarm computing, distributed AI, distributed ledger systems and decentralised identity management.

This keynote talk will overview the evolution of orchestrator tools from centralised cloud and cloud-to-edge orchestrators towards fully decentralised solutions, and it will demonstrate this evolution via the example of a cloud orchestrator family developed and applied in several European-funded collaborative research projects.
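As a toy illustration of the core decision such an orchestrator automates, the sketch below greedily places microservices onto cloud, fog, and edge nodes subject to latency and capacity constraints. The node and service names are hypothetical, and the logic is vastly simplified compared with any real orchestration tool.

```python
# Hypothetical resource landscape: one edge, one fog, and one cloud node.
nodes = [
    {"name": "edge-1",  "latency_ms": 5,   "free_cpu": 2},
    {"name": "fog-1",   "latency_ms": 20,  "free_cpu": 8},
    {"name": "cloud-1", "latency_ms": 120, "free_cpu": 64},
]
# Hypothetical microservices with latency bounds and CPU demands.
services = [
    {"name": "sensor-filter", "max_latency_ms": 10,  "cpu": 1},
    {"name": "aggregator",    "max_latency_ms": 50,  "cpu": 4},
    {"name": "analytics",     "max_latency_ms": 500, "cpu": 16},
]

def place(services, nodes):
    # Greedy placement: pick the lowest-latency node that satisfies both
    # the service's latency bound and its CPU demand.
    plan = {}
    for s in services:
        for n in sorted(nodes, key=lambda n: n["latency_ms"]):
            if n["latency_ms"] <= s["max_latency_ms"] and n["free_cpu"] >= s["cpu"]:
                n["free_cpu"] -= s["cpu"]
                plan[s["name"]] = n["name"]
                break
    return plan

print(place(services, nodes))
# {'sensor-filter': 'edge-1', 'aggregator': 'fog-1', 'analytics': 'cloud-1'}
```

A real orchestrator solves this placement continuously at runtime and, in the decentralised model discussed in the talk, without any single controller owning the decision.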
Prof. Dhabaleswar Panda
Title of the Talk: Designing Converged Middleware for HPC, AI, Big Data, and Data Science

Abstract: This talk will focus on challenges and opportunities in designing converged middleware for HPC, AI (Deep/Machine Learning), Big Data, and Data Science. We will start with the challenges in designing runtime environments for MPI+X programming models by considering support for multi-core systems, high-performance networks (InfiniBand, RoCE, Slingshot), GPUs (NVIDIA, AMD, and Intel), and emerging BlueField-3 DPUs.
Features and sample performance numbers of the MVAPICH libraries across a range of benchmarks will be presented.
For the Deep/Machine Learning domain, we will focus on MPI-driven solutions (MPI4DL) and the Mix-and-Match Communication Runtime (MCR-DL) to extract performance and scalability for popular Deep Learning frameworks (TensorFlow and PyTorch), large out-of-core models, and parallel inferencing. Finally, we will focus on MPI-driven solutions to accelerate Big Data applications (MPI4Spark) and data science applications (MPI4Dask); appropriate benchmark results will be presented.
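As a minimal illustration of the communication pattern behind such MPI-driven Deep Learning solutions, the following mpi4py sketch averages per-rank gradients with Allreduce; it shows only the generic pattern and is not the actual MVAPICH, MPI4DL, or MCR-DL API.

```python
# Minimal mpi4py sketch of MPI-driven data-parallel training: each rank
# computes local gradients, and Allreduce combines them across all ranks.
# Run with: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_grad = np.full(8, float(rank))   # stand-in for a per-rank gradient
global_grad = np.empty_like(local_grad)

comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size                    # average across all ranks

if rank == 0:
    print("averaged gradient:", global_grad)
```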
Prof. Kalyanmoy Deb
Title of the Talk: Implicit Parallelism in Evolutionary Multi-Criterion Optimization Algorithms

Abstract: Evolutionary multi-criterion optimization (EMO) algorithms attempt to find multiple Pareto-optimal solutions simultaneously using a population-based evolutionary computation (EC) principle. Instead of finding a single solution, EMO algorithms allow human decision-makers to analyze a set of alternative solutions before choosing a single preferred solution for implementation. The success of EMO algorithms in solving two- to 20-objective problems comes from their implicit parallel search properties. In this keynote, we shall introduce the implicit parallelism concept in the context of EMO algorithms and support the arguments with simulation results. Flipping the idea of implicit parallelism, we shall also discuss the concept of explicit parallelism through decomposition-based and distributed-computing-based algorithms, with support from simulation results. These concepts will provide a clear understanding of how EMO algorithms work and motivate participants to enter the growing field of EMO research and application.
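As a small illustration of the multi-objective selection at the heart of EMO algorithms, the sketch below extracts the non-dominated (Pareto) set from a population of objective vectors; the random population is purely illustrative.

```python
# Minimal sketch: extracting the non-dominated (Pareto) set from a
# population, the core selection step shared by most EMO algorithms.
import numpy as np

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly
    # better in at least one.
    return np.all(a <= b) and np.any(a < b)

def pareto_front(F):
    # Return indices of the non-dominated rows of objective matrix F (n x m).
    n = len(F)
    return [i for i in range(n)
            if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]

# Toy two-objective population: minimize both f1 and f2.
rng = np.random.default_rng(1)
F = rng.random((20, 2))
print("Pareto-optimal indices:", pareto_front(F))
```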
Prof. D.P. Vidyarthi
Title of the Talk: Fog Device Deployment for Maximal Network Connectivity and Edge Coverage using the JAYA Algorithm

Abstract: Fog computing emerged to address the limitations and challenges of traditional Cloud computing, particularly in handling real-time, heterogeneous, and latency-sensitive applications. However, the spread of Fog computing devices across the network introduces various challenges, especially concerning device connectivity and ensuring sufficient coverage to fulfil users' requests. To maintain network operability, Fog Device Deployment (FDD) must effectively consider two crucial factors: connectivity and coverage. Network connectivity relies on FDD, which determines the physical network topology, while coverage determines the accessibility of Internet of Things (IoT) or edge devices. Both objectives significantly impact network performance and guarantee the network's Quality of Service (QoS). However, determining an optimal FDD method that reduces computation and communication overhead while providing high network connectivity and coverage is challenging. In this work, we propose an FDD algorithm that effectively connects the Fog devices for internal communication and covers the maximum number of edge devices to serve their requests. First, FDD is formulated as a multi-objective optimization problem; then, an emerging metaheuristic, the Jaya Algorithm (JA), is applied to optimize the multi-objective function. The suitability of the JA for the FDD problem is substantiated by its rapid convergence and better computational complexity when contrasted with other contemporary population-based algorithms. Finally, the performance of the proposed method is assessed across a spectrum of benchmark-generated instances, each reflecting a distinct Fog scenario. The experimental outcomes showcase the proposed method's remarkable promise, especially when compared against state-of-the-art methodologies.
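For readers unfamiliar with JA, the following minimal sketch shows its parameter-free update rule (move toward the best solution, away from the worst) on a placeholder objective; the actual FDD fitness combining connectivity and coverage would replace the hypothetical `sphere` function below.

```python
# Minimal JAYA sketch for a generic minimization problem.
import numpy as np

def jaya(f, dim, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))            # random initial population
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best, worst = X[fit.argmin()], X[fit.argmax()]
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        # JAYA move: toward the best solution and away from the worst.
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                     lo, hi)
        fn = np.apply_along_axis(f, 1, Xn)
        improved = fn < fit                         # greedy acceptance
        X[improved], fit[improved] = Xn[improved], fn[improved]
    return X[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x * x))             # placeholder objective
x_best, f_best = jaya(sphere, dim=4)
print("best solution:", x_best, "fitness:", f_best)
```

Note that, unlike many population-based metaheuristics, this update needs no algorithm-specific control parameters, which is one reason for JA's rapid convergence claim in the abstract.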
Prof. Kishore Kothapalli, Dean (Academics)
Title of the Talk: Recent Progress and Challenges in Parallel Dynamic Graph Algorithms

Abstract: Graphs are a useful mechanism to capture several real-world phenomena. Examples include social networks, epidemiological networks, collaboration networks, and the like. This has enabled a large body of work on graph algorithms and the availability of many real-world graph datasets to experiment on. However, graphs corresponding to real-world phenomena evolve with time due to changes in the underlying phenomena. For instance, social networks evolve as people locate more friends, and collaboration networks evolve as more researchers collaborate with each other.
Efficient parallel algorithms for evolving, dynamic graphs are essential to address the scale issue. These dynamic graph algorithms aim to update the graph analytics due to changes in the underlying graph without resorting to full-scale recomputation, which is often time-consuming. One classifies dynamic algorithms as incremental algorithms, decremental algorithms, or fully dynamic algorithms, depending on whether the update adds edges/vertices, deletes edges/vertices, or can both add and delete edges/vertices in the underlying graph, respectively.
There has been considerable progress in designing parallel algorithms for dynamic graphs in recent years. Such algorithms are known for various problems, including maintaining connected components, biconnected components, computing centrality scores, shortest paths, and the like. In the parallel setting, these algorithms consider a batch update model where a batch of edges is added to or deleted from the underlying graph.
In this talk, we will review some of the technical commonalities in the above works. Many of the parallel dynamic algorithms rely on identifying the entities of the graph that are affected by the update, running a computation on the set of affected entities, and then performing optional post-processing. We will outline two approaches that help parallel dynamic graph algorithms identify the set of affected entities. The first approach uses specific algorithmic properties, whereas the second uses iterative, frontier-based approaches.
We illustrate the two approaches via examples of problems such as centrality metrics and PageRank. We also dwell on the nature of problems for which these approaches tend to be useful. We then show some experimental results of the two approaches and study their benefits and limitations.
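As a simple illustration of the frontier-based approach, the sketch below updates BFS distances after a batch of edge insertions by propagating only from affected vertices instead of recomputing from scratch; it is an illustrative toy, not code from any of the works surveyed in the talk.

```python
# Frontier-based incremental update of BFS distances after a batch of
# undirected edge insertions.
from collections import deque

def incremental_bfs(adj, dist, inserted_edges):
    # adj: adjacency dict; dist: current distances from a fixed source.
    frontier = deque()
    for u, v in inserted_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        # Seed the frontier with endpoints whose distance improves: these
        # are the vertices "affected" by the update.
        for a, b in ((u, v), (v, u)):
            if dist.get(a, float("inf")) + 1 < dist.get(b, float("inf")):
                dist[b] = dist[a] + 1
                frontier.append(b)
    # Relax outward from affected vertices only; untouched parts of the
    # graph are never revisited.
    while frontier:
        x = frontier.popleft()
        for y in adj.get(x, []):
            if dist[x] + 1 < dist.get(y, float("inf")):
                dist[y] = dist[x] + 1
                frontier.append(y)
    return dist

adj = {0: [1], 1: [0, 2], 2: [1]}
dist = {0: 0, 1: 1, 2: 2}                      # distances from source 0
print(incremental_bfs(adj, dist, [(0, 2)]))    # {0: 0, 1: 1, 2: 1}
```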
Subsequently, the talk discusses difficulties in testing parallel dynamic graph algorithms. To this end, we present a solution via a tool for generating dynamic graph instances. The tool allows researchers to generate dynamic graphs according to multiple probability distributions, in addition to bounding some of the structural properties of the graphs thus generated. This tool helps researchers study dynamic graph algorithms and make such studies reproducible. The talk concludes by outlining important problems in this domain for future work.
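As a toy analogue of such a generator, the sketch below emits batches of edge insertions and deletions over a fixed vertex set; a real tool would offer multiple probability distributions and structural bounds, while this one only shows the basic shape.

```python
# Toy dynamic-graph instance generator: yields batches of edge updates.
import random

def generate_updates(n, batches, batch_size, p_insert=0.7, seed=0):
    rng = random.Random(seed)
    edges = set()
    for _ in range(batches):
        batch = []
        for _ in range(batch_size):
            if edges and rng.random() > p_insert:
                e = rng.choice(sorted(edges))   # delete an existing edge
                edges.discard(e)
                batch.append(("del", *e))
            else:
                u, v = rng.sample(range(n), 2)  # uniform endpoints; a real
                e = (min(u, v), max(u, v))      # tool could skew the choice
                if e not in edges:
                    edges.add(e)
                    batch.append(("ins", *e))
        yield batch

for batch in generate_updates(n=10, batches=3, batch_size=4):
    print(batch)
```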
Dr. Apurba Das
Title of the Talk: Perception Engineering through Vision AI and Generative AI: Research to Industry Deployment

Abstract: Today's world of AI-enabled IoT does not only change the way "things" are used; it is also changing "lives", holistically. From the age of machine learning with hand-crafted features, there was a significant transition to the age of deep learning, where the features, along with the classifier, could be learnt. Now, through the advancement of transformers and large language models (LLMs), AI has entered the world of creativity for the first time. The journey from "writing code" through "providing data" to "asking the right question (the prompt)" is a fascinating one in the AI era.

Whenever humans have attempted to define AI, it has been either towards designing an expert system or towards imitating human intelligence itself, within known boundaries. Whatever the case, reacting to a changing environment has been a core task of AI. To react to the environment intelligently, it is of foremost importance to perceive the environment intelligently. Hence, engineering perception is of critical importance. Of our five senses, vision is undoubtedly the most important, most complex, and most accurate. Hence, Vision AI plays the most important role in enabling an intelligent machine to sense its environment. However, vision alone cannot always determine how to react or respond; many times, other senses must also be fused in to interpret the observed environment and react.

The applicability of perception-based Vision AI and Generative AI, in combination, to solving different real-life problems across industries will be discussed. In manufacturing, Vision AI ensures process adherence through real-time multi-modal micro-activity recognition based on motion analytics; in retail, theft is detected through Vision AI agents; in logistics, optimal space utilization is ensured through vision perception engineering; thousands of such problems are being solved with very quick turnaround times.

In this talk, Dr. Das will shed light on the fascinating technologies of Vision AI and Generative AI from an industry expert's viewpoint. His more than two decades of experience in this field will help him stitch together the journey from industry challenges through research to the final deployment of solutions. A most important factor in the realization of the AI revolution is the availability of large compute infrastructure: running inference with LLMs, VLMs (Vision Language Models), or Visual AI agents in a production framework requires significant GPU horsepower, which is nevertheless justified by the business value it delivers.