Posters on Theme 2

Theme 2 RESOURCE CONTROL & MANAGEMENT

Posters
“Task Assignment Across Clouds by Graph Partitioning” Ravneet Kaur, John Chinneck, Murray Woodside, Carleton University
“Simulation of Future Application-Aware Multi-Clouds” Derek Hawker, John Chinneck, Murray Woodside, Carleton University
“Relational Edge Caching for Edge-Aware Web Applications” Hemant Saxena, Kenneth Salem, University of Waterloo
“Optimal Service Replica Placement via Predictive Model Control” Hamoun Ghanbari, Przemyslaw Pawluk, Cornel Barna, Marin Litoiu, York University
“An Architecture for Mitigating Low and Slow Application DDoS Attacks” Mark Shtern, Roni Sandel, Vasileios Theodorou, Marin Litoiu, York University, Chris Bachalo, Juniper Networks
“Toward a Cost-Throughput Metric for Finding Effective Application Deployments in Cloud” Przemyslaw Pawluk, Mark Shtern, Rizwan Mian, Marin Litoiu, York University
“Model-driven Elasticity and DoS Attack Mitigation in Cloud Environments” Cornel Barna, Mark Shtern, Hamoun Ghanbari, Michael Smit, Marin Litoiu, York University
POSTER & DEMONSTRATION
“Cloud bursting for MapReduce jobs: A dream or a reality?” Rizwan Mian, Mark Shtern, Saeed Zareian, Marin Litoiu, York University
“Crowd-sourcing Sensor Data” Brad Simmons, Saeed Zareian, Manish Garg, Mark Shtern, Rizwan Mian, Marin Litoiu, York University
DEMONSTRATIONS
“Mitigation of Low and Slow Application DDoS Attacks with SDI” Cornel Barna, Mark Shtern, Marin Litoiu, York University, Chris Bachalo, Juniper Networks

POSTERS:

Task Assignment Across Clouds by Graph Partitioning

Ravneet Kaur, John Chinneck, Murray Woodside, Carleton University

Task assignment in cloud computing normally ignores inter-task communications on the assumption that this is a minor effect, but communications latency can have a big impact in newer cloud architectures where the cloud consists of multiple computing centers of various sizes and the inter-cloud communication times are not negligible. We study the case of task partitioning between a main “core” cloud and a smaller “edge” cloud closer to the end user, where the edge-core communication time is not negligible. Transactions having heavy interaction with the user are best placed in the edge cloud, and those requiring heavy computation with less communication are best placed in the core. We propose iterative graph partitioning methods that assign tasks in the task graph to the edge or core by minimizing a cut related to communication volumes and processing capacity. Initial experimental results are promising.
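A minimal sketch of the flavour of edge/core partitioning described above, assuming a task graph with per-task CPU demands, pairwise communication volumes, and a fixed edge-cloud capacity. The greedy move heuristic, the pinned "client" node, and all names are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: a greedy local-search partitioner that moves tasks
# between an "edge" and a "core" cloud to reduce cross-cloud communication,
# subject to the edge cloud's processing capacity.

def cut_volume(assign, comm):
    """Total communication volume crossing the edge/core boundary."""
    return sum(v for (a, b), v in comm.items() if assign[a] != assign[b])

def partition(tasks, cpu, comm, edge_capacity, pinned=None, rounds=10):
    pinned = pinned or {}
    # Start with pinned tasks where they belong and everything else in the core.
    assign = {t: pinned.get(t, "core") for t in tasks}
    for _ in range(rounds):
        improved = False
        for t in tasks:
            if t in pinned:
                continue
            current = assign[t]
            other = "edge" if current == "core" else "core"
            # Respect the edge cloud's smaller processing capacity.
            edge_load = sum(cpu[x] for x in tasks if assign[x] == "edge")
            if other == "edge" and edge_load + cpu[t] > edge_capacity:
                continue
            before = cut_volume(assign, comm)
            assign[t] = other
            if cut_volume(assign, comm) < before:
                improved = True          # keep the move
            else:
                assign[t] = current      # revert
        if not improved:
            break
    return assign

# Toy example: "client" is pinned at the edge, so the chatty "ui" task follows it
# there, while the compute-heavy tasks stay in the core.
tasks = ["client", "ui", "logic", "db"]
cpu = {"client": 0.0, "ui": 1.0, "logic": 3.0, "db": 4.0}
comm = {("client", "ui"): 8.0, ("ui", "logic"): 2.0, ("logic", "db"): 5.0}
print(partition(tasks, cpu, comm, edge_capacity=2.0, pinned={"client": "edge"}))
```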

Simulation of Future Application-Aware Multi-Clouds

Derek Hawker, John Chinneck, Murray Woodside, Carleton University

We have extensively modified the DCSim cloud simulator for use in examining the impact of latency on the response times of applications running on multi-cloud architectures. The main new features are the integration of the analytic Layered Queuing Network Solver (LQNS), and an expanded application model based on Layered Queuing Networks (LQN).

Together these provide response times based on an LQN application model that reflects how the underlying VM deployment creates latencies between tasks. Also new is support for the creation of a variety of multi-cloud architectures.
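As a toy illustration of the idea that deployment determines inter-task latency, the sketch below estimates the response time of a call chain under two placements. It uses no DCSim or LQNS API; the latency constants, task names, and the additive model are assumptions for illustration only.

```python
# Illustrative sketch only: a call chain's response time with per-hop latency
# that depends on whether caller and callee VMs sit in the same cloud.

SAME_CLOUD_MS = 0.5      # assumed intra-cloud network latency
CROSS_CLOUD_MS = 20.0    # assumed edge<->core latency

def response_time(chain, service_ms, placement):
    """Sum of service times plus a placement-dependent latency for each hop."""
    total = sum(service_ms[t] for t in chain)
    for caller, callee in zip(chain, chain[1:]):
        same = placement[caller] == placement[callee]
        total += SAME_CLOUD_MS if same else CROSS_CLOUD_MS
    return total

chain = ["web", "app", "db"]
service_ms = {"web": 2.0, "app": 5.0, "db": 8.0}
print(response_time(chain, service_ms, {"web": "edge", "app": "core", "db": "core"}))
print(response_time(chain, service_ms, {"web": "core", "app": "core", "db": "core"}))
```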

Relational Edge Caching for Edge-Aware Web Applications

Hemant Saxena, Kenneth Salem, University of Waterloo

Web application latencies can be reduced by migrating all or parts of the application to the network edge, closer to end users. However, web applications normally depend on a back-end database, and moving the application to the edge without moving the database is of little value.

To address this problem, we present preliminary work towards an edge-aware dynamic data replication architecture for relational database systems. Our architecture is designed to support applications that rely on substantial amounts of end-user-generated content. To do so, it must allow database updates, as well as queries, to be handled at the edge.
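A minimal sketch of how an edge replica might route operations, assuming queries on cached relations are served locally while updates are applied locally and propagated asynchronously to the origin database. The class and method names are hypothetical and do not describe the poster's actual system.

```python
# Illustrative sketch only: an edge node that answers queries from a local
# relational replica and accepts updates locally, queueing them for
# asynchronous propagation to the back-end (origin) database.

import queue

class EdgeReplica:
    def __init__(self, local_db, origin):
        self.local_db = local_db            # handle to a local RDBMS replica
        self.origin = origin                # handle to the back-end database
        self.outbox = queue.Queue()         # updates awaiting propagation

    def query(self, sql, params=()):
        # Queries over edge-cached relations are served locally.
        return self.local_db.execute(sql, params).fetchall()

    def update(self, sql, params=()):
        # Updates are applied at the edge first...
        self.local_db.execute(sql, params)
        # ...then shipped to the origin in the background.
        self.outbox.put((sql, params))

    def propagate(self):
        # Called periodically (or by a worker thread) to drain the outbox.
        while not self.outbox.empty():
            sql, params = self.outbox.get()
            self.origin.execute(sql, params)
```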

Optimal Service Replica Placement via Predictive Model Control

Hamoun Ghanbari, Przemyslaw Pawluk, Cornel Barna, Marin Litoiu, York University

We present a model and an algorithm for optimal service placement (OSP) of a set of N-tier software systems. The placement is subject to dynamic workload changes, Service Level Agreements (SLAs), and administrator preferences. The objective function consists of resource costs, thrashing costs, and SLA satisfaction. The optimization algorithm is predictive: its allocation or reallocation decisions are based not only on current metrics but also on the predicted evolution of the system.
At each step, the solution of the optimization is a set of service replicas to be added to or removed from the available hosts. These deployment changes are optimal with respect to overall objectives defined over time.
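A rough sketch, under assumed notation, of the kind of receding-horizon objective such a predictive controller might minimize (the poster's exact formulation may differ): here u_k is the replica allocation at step k, H the prediction horizon, and y-hat_k the predicted performance under the forecast workload w-hat_k.

```latex
% Hedged sketch of a predictive placement objective; notation is illustrative.
\min_{u_1,\dots,u_H}\;\sum_{k=1}^{H}\Big[
    \underbrace{c_{\mathrm{res}}(u_k)}_{\text{resource cost}}
  + \underbrace{c_{\mathrm{thr}}(u_k - u_{k-1})}_{\text{thrashing cost}}
  + \underbrace{c_{\mathrm{sla}}\big(\hat{y}_k(u_k,\hat{w}_k)\big)}_{\text{SLA violation penalty}}
\Big]
```

Under this reading, the change actually applied at each step is the difference between the optimized allocation and the current one, i.e. the replicas to add or remove.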

An Architecture for Mitigating Low and Slow Application DDoS Attacks

Mark Shtern, Roni Sandel, Vasileios Theodorou, Marin Litoiu, York University, Chris Bachalo, Juniper Networks

Distributed Denial of Service (DDoS) attacks are a growing threat to organizations. As defense mechanisms advance, hackers in turn are aiming at the application layer. For example, application-layer Low and Slow Distributed Denial of Service attacks are becoming a serious issue because their low resource consumption makes them harder to detect.

In this poster, we propose a reference architecture that mitigates Low and Slow DDoS attacks by utilizing Software Defined Infrastructure capabilities. Further, we propose two implementations of the reference architecture, based on a Performance Model and on an Off-The-Shelf Component, respectively. We also present the Shark Tank concept: a closely scrutinized cluster to which suspicious requests are redirected for further filtering.
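A minimal sketch of the request-steering idea behind the Shark Tank, assuming some scoring of how suspicious a request looks. The scoring heuristic, threshold, and addresses are placeholders; the reference architecture performs this steering at the Software Defined Infrastructure level rather than in application code.

```python
# Illustrative sketch only: suspicious requests are steered to a closely
# monitored "shark tank" cluster for further filtering, while normal traffic
# goes to the production cluster. Scoring and thresholds are placeholders.

PRODUCTION_POOL = ["10.0.0.11", "10.0.0.12"]
SHARK_TANK_POOL = ["10.0.1.21"]
SUSPICION_THRESHOLD = 0.8

def suspicion_score(request):
    # Placeholder heuristic: low-and-slow attacks hold connections open and
    # dribble bytes, so a very low transfer rate over a long-lived connection
    # looks suspicious.
    duration = max(request["duration_s"], 1e-6)
    rate = request["bytes_sent"] / duration
    return 1.0 if (duration > 30 and rate < 10) else 0.0

def route(request):
    suspicious = suspicion_score(request) >= SUSPICION_THRESHOLD
    pool = SHARK_TANK_POOL if suspicious else PRODUCTION_POOL
    return pool[hash(request["client_ip"]) % len(pool)]

print(route({"client_ip": "203.0.113.7", "duration_s": 45.0, "bytes_sent": 120}))
```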

Toward a Cost-Throughput Metric for Finding Effective Application Deployments in Cloud

Przemyslaw Pawluk, Mark Shtern, Rizwan Mian, Marin Litoiu, York University

The existence of different cloud vendors poses a significant barrier for cloud users making application deployment decisions. Cloud vendors expose non-standard service abstractions with custom properties, which makes comparing clouds on application performance a non-trivial task. As a result, it is difficult to determine the appropriate resources and budget for an application to satisfy its SLAs across different clouds. Complex modelling methods are therefore used, but they require expensive profiling or expert knowledge. Unfortunately, these methods are usually beyond the reach of a typical cloud user.

In this poster, we propose an approach to selecting a deployment topology based on historical data. Our approach avoids expensive profiling and instead uses historical data collected as a by-product of application monitoring and management. We expect cloud vendors to benefit immediately from our approach, since they have direct and easy access to application performance metrics; in fact, cloud vendors can offer an advisory service based on it. Our method also benefits users who run multiple applications in the cloud and can harness monitored data to improve existing or new application deployments.
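A minimal sketch of ranking candidate deployment topologies by a cost-per-throughput figure computed from historical monitoring records. The record fields, sample values, and the specific metric are assumptions for illustration, not the poster's exact metric.

```python
# Illustrative sketch only: rank candidate deployment topologies by the cost
# paid per unit of delivered throughput, using records gathered as a
# by-product of routine monitoring.

history = [
    {"topology": "2-small", "cost_per_hour": 0.20, "requests_per_sec": 180.0},
    {"topology": "1-large", "cost_per_hour": 0.28, "requests_per_sec": 240.0},
    {"topology": "4-small", "cost_per_hour": 0.40, "requests_per_sec": 330.0},
]

def cost_throughput(record):
    # Dollars spent per million requests served: lower is better.
    per_request = record["cost_per_hour"] / (record["requests_per_sec"] * 3600.0)
    return per_request * 1_000_000

best = min(history, key=cost_throughput)
print(best["topology"], round(cost_throughput(best), 3))
```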

Model-driven Elasticity and DoS Attack Mitigation in Cloud Environments

Cornel Barna, Mark Shtern, Hamoun Ghanbari, Michael Smit, Marin Litoiu, York University

Workloads for web applications can change rapidly. When the change is an increase in customers, a common adaptive approach to uphold SLAs is elasticity, the on-demand allocation of computing resources. However, application-level denial-of-service (DoS) attacks can also cause changes in workload, and they require an entirely different response. These two issues are often addressed separately, in both research and practice.

This poster presents a model-driven adaptive management mechanism that can correctly scale a web application, mitigate a DoS attack, or both, based on an assessment of the business value of the workload. This approach is enabled by modifying a layered queuing network model, previously used to model data centers, so that it also accurately predicts short-term cloud behavior despite the cloud's variability over time.
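A minimal sketch of the kind of decision such a manager might take once workload has been classified by business value: scale out for valuable demand, mitigate for traffic that carries none. The class labels, value estimates, and thresholds are placeholders, not the poster's mechanism or model.

```python
# Illustrative sketch only: when load rises, first ask whether the extra
# traffic carries business value (scale out) or not (treat it as an
# application-level DoS and filter it). Thresholds are placeholders.

def plan_action(workload_classes, capacity_rps):
    valuable = sum(c["rps"] for c in workload_classes if c["value_per_req"] > 0)
    worthless = sum(c["rps"] for c in workload_classes if c["value_per_req"] <= 0)
    actions = []
    if worthless > 0.2 * (valuable + worthless):
        actions.append("mitigate: rate-limit or divert low-value traffic")
    if valuable > capacity_rps:
        actions.append("scale out: add application replicas")
    return actions or ["no action"]

classes = [
    {"name": "checkout", "rps": 120.0, "value_per_req": 0.50},
    {"name": "browse",   "rps": 300.0, "value_per_req": 0.02},
    {"name": "bot-like", "rps": 250.0, "value_per_req": 0.0},
]
print(plan_action(classes, capacity_rps=350.0))
```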

POSTER & DEMONSTRATION:

Cloud bursting for MapReduce jobs: A dream or a reality?

Rizwan Mian, Mark Shtern, Saeed Zareian, Marin Litoiu, York University

Cloud bursting has mostly been explored for computational workloads, or it assumes that the data already exists in the public cloud. We take a step towards data-intensive job bursting between clouds. In particular, we explore the practicality and usefulness of MapReduce (MR) job bursting between clouds, assuming that there is high bandwidth between them. The experiments for MR bursting are conducted in a hierarchical cloud infrastructure, namely SAVI, in which the edge and core are interconnected by high-bandwidth links.

Crowd-sourcing Sensor Data

Saeed Zareian, Manish Garg, Mark Shtern, Rizwan Mian, Marin Litoiu, York University

The Connected Vehicles and Smart Transportation (CVST) project is establishing a flexible and open application platform. It aims to integrate advanced wireless and sensor communications with mobile computing techniques in a cloud-based environment. As a practical step, we are building a mobile application that interacts with a data management system to advise users on travel conditions. Our aim is to alleviate transportation issues faced by commuters and the government by analyzing data obtained through crowd-sourcing from mobile users.

In our implementation, we use the Play Framework, which builds on Akka and Netty, for fast, lightweight scalability and responsiveness, alongside HBase for scalability in the data layer. In addition, we use Apache Cordova in our mobile application for improved portability across different brands of mobile phones.

DEMONSTRATIONS:

Mitigation of Low and Slow Application DDoS Attacks with SDI

Cornel Barna, Mark Shtern, Marin Litoiu, York University, Chris Bachalo, Juniper Networks

Distributed Denial of Service (DDoS) attacks are a growing threat to organizations. As defense mechanisms advance, hackers in turn are aiming at the application layer. For example, application-layer Low and Slow Distributed Denial of Service attacks are becoming a serious issue because their low resource consumption makes them harder to detect.

In this demo, we show an implementation of a reference architecture that mitigates Low and Slow Distributed Denial of Service attacks by utilizing Software Defined Infrastructure capabilities.