Optimized SDN Enabled Live VM Migration - by A Harika
Live VM migration is a technique used for load balancing and for optimizing VM placement in data centers. It transfers an OS instance, its memory, and its local resources (e.g., storage and network interfaces) to another node without shutting down the VM. However, several problems in this domain remain unaddressed because of the constraints of traditional network architectures.
I. CHALLENGES IN LIVE VM MIGRATION:
- Migration algorithms should take into account the temporal network load and the end-to-end paths of pairwise VM flows dynamically. For this purpose, the entire topology knowledge (network routes, link weights) must be fed into all the network elements after every topology update, which couples the whole system too tightly to a given topology.
- In traditional architectures, network updates and new forwarding rules must also be installed into each and every network element after each migration, which limits network optimization.
II. SDN BASED SOLUTION TO THE ABOVE CHALLENGES:
The main concept of SDN is decoupling the forwarding plane from the control plane. One can leverage the features of SDN to make live VM migration more flexible. The SDN controller can dynamically adapt to changing traffic flows and therefore adjust the network on a short timescale, for example by reassigning network routes and link weights after a topology change. The logical centralization of the control plane also makes it much simpler to query the global network state.
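As a minimal sketch of what "querying the global state" can mean in practice, the snippet below models a logically centralized topology view: one in-memory structure that every link update lands in and every query reads from. The class and method names are illustrative and not tied to any real controller's API.

```python
# Minimal sketch of a logically centralized network view, as an SDN
# controller might keep it. All names are illustrative assumptions,
# not a real controller API.

class TopologyView:
    """Global topology: links keyed by an unordered node pair, with a weight."""

    def __init__(self):
        self.links = {}  # (u, v) -> weight

    def _key(self, u, v):
        return tuple(sorted((u, v)))

    def update_link(self, u, v, weight):
        # A single update here is immediately visible to every query --
        # no per-switch state needs re-pushing just to answer questions.
        self.links[self._key(u, v)] = weight

    def remove_link(self, u, v):
        self.links.pop(self._key(u, v), None)

    def weight(self, u, v):
        return self.links.get(self._key(u, v))

    def neighbors(self, node):
        return [b if a == node else a
                for (a, b) in self.links if node in (a, b)]


view = TopologyView()
view.update_link("s1", "s2", 10)
view.update_link("s2", "s3", 5)
print(view.weight("s1", "s2"))       # 10
print(sorted(view.neighbors("s2")))  # ['s1', 's3']
```

In a traditional distributed design, answering `neighbors("s2")` would require state held across several devices; here it is a single dictionary scan.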
II.A SYSTEM ARCHITECTURE
In the proposed architecture, each data center has a network controller, a Cloud-OS, and a programmable gateway, with a global orchestrator for inter-data center communication.
Fig. 1 System Architecture
II.B ROLE OF EACH COMPONENT:
- SDN CONTROLLER: calculates the optimal routing paths and installs the forwarding rules into the network elements (OpenFlow-enabled switches).
- CLOUD-OS: carries out all VM-related tasks, such as VM creation and deletion; OpenStack is a typical example.
- PROGRAMMABLE GATEWAY: an OpenFlow-enabled switch that performs network address translation for routing packets to other data centers.
- GLOBAL ORCHESTRATOR: coordinates the network controllers and cloud management systems across multiple data centers. In particular, it maintains the location information of all VMs, helps select the best paths for migration traffic, and controls the network update process.
III.A VM MIGRATION WITHIN THE DATA CENTER
1. VM MIGRATION INITIATION
The entire topology information is maintained in the SDN controller. Whenever a VM migration is initiated, the Cloud-OS verifies the migration conditions, such as whether the destination server has enough capacity to host the VM. Either by running algorithms such as PUSH, PULL, and HYBRID, or by assigning weights to all the network links, it calculates the communication costs and decides whether the migration is worthwhile. Instead of obtaining link weights for VM pairs statically from a table instantiated at the beginning, SDN can be used to calculate link weights programmatically whenever necessary, since the controller maintains a real-time view of the network links and their status. If a link fails, the controller can compensate by adjusting other relevant link weights accordingly, avoiding problems such as logical over-subscription on other links.
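The cost calculation above can be sketched as a shortest-path computation over the controller's current link weights. The graph format, the peer set, and the cost budget below are illustrative assumptions; real systems would weight links by measured load.

```python
import heapq

# Hypothetical sketch: deriving the communication cost of a candidate
# migration from the controller's current link weights, rather than
# from a static table. The graph encoding and the budget threshold
# are assumptions for illustration.

def cheapest_path_cost(links, src, dst):
    """Dijkstra over controller-reported link weights.
    links: dict mapping node -> {neighbor: weight}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in links.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def migration_worthwhile(links, candidate_host, peers, budget):
    """Sum pairwise communication costs from the candidate host to the
    VM's traffic peers; migrate only if the total stays within budget."""
    cost = sum(cheapest_path_cost(links, candidate_host, p) for p in peers)
    return cost <= budget

links = {
    "h1": {"s1": 1}, "s1": {"h1": 1, "s2": 2, "s3": 5},
    "s2": {"s1": 2, "h2": 1}, "s3": {"s1": 5, "h2": 4},
    "h2": {"s2": 1, "s3": 4},
}
print(cheapest_path_cost(links, "h1", "h2"))                  # 4 (h1-s1-s2-h2)
print(migration_worthwhile(links, "h1", ["h2"], budget=5))    # True
```

Link-failure compensation then amounts to editing the `links` dictionary (raising the weights of the overloaded alternatives) and re-running the same function.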
2. UPDATING TOPOLOGY
The cloud management system reports the corresponding configuration to both the controller and the global orchestrator simultaneously. As soon as the Cloud-OS reports it, both of them update their topology information. The optimal path for forwarding packets is then computed by the controller, since it has the global view of the entire network.
3. SFOP ALGORITHM FOR OPTIMAL PATH CALCULATION
Fine-grained statistics of the OpenFlow interfaces can be used to calculate the optimal path. First, the algorithm computes the widest bandwidth and the corresponding path. Second, it compares the feasible bandwidth (FBW) with the widest bandwidth found: if the FBW is less than or equal to the widest bandwidth, it uses the FBW; otherwise, it uses the widest bandwidth itself. Third, it finds the shortest path that offers the chosen bandwidth. The algorithm thus computes the optimal path that provides the feasible bandwidth with minimum hop count. At the same time, it also optimizes link bandwidth utilization, because it always uses a path with the feasible bandwidth in the current state of the SDN-WAN and leaves the links with the widest bandwidth for best-effort traffic.
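The three steps above can be sketched as follows: a modified Dijkstra finds the widest (maximum-bottleneck) bandwidth, and a BFS restricted to links with enough capacity yields the minimum-hop feasible path. This is a reconstruction of the described idea under an assumed graph encoding, not the paper's reference implementation.

```python
import heapq
from collections import deque

# Sketch of the SFOP idea: (1) compute the widest bandwidth, (2) clamp
# the feasible bandwidth (FBW) to it, (3) take the minimum-hop path
# whose bottleneck still meets that bandwidth. Link values are
# available bandwidths; the encoding is an assumption.

def widest_bandwidth(links, src, dst):
    """Maximum-bottleneck bandwidth from src to dst (modified Dijkstra)."""
    best = {src: float("inf")}
    heap = [(-float("inf"), src)]
    while heap:
        bw, u = heapq.heappop(heap)
        bw = -bw
        if u == dst:
            return bw
        for v, cap in links.get(u, {}).items():
            nb = min(bw, cap)           # bottleneck along the path
            if nb > best.get(v, 0):
                best[v] = nb
                heapq.heappush(heap, (-nb, v))
    return 0

def sfop_path(links, src, dst, fbw):
    """Minimum-hop path whose bottleneck meets the chosen bandwidth."""
    widest = widest_bandwidth(links, src, dst)
    target = fbw if fbw <= widest else widest   # fall back to widest
    prev, q = {src: None}, deque([src])
    while q:                                    # BFS = fewest hops
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v, cap in links.get(u, {}).items():
            if cap >= target and v not in prev:
                prev[v] = u
                q.append(v)
    return None

links = {
    "a": {"b": 10, "c": 3},
    "b": {"a": 10, "d": 10},
    "c": {"a": 3, "d": 3},
    "d": {"b": 10, "c": 3},
}
print(widest_bandwidth(links, "a", "d"))   # 10 (via a-b-d)
print(sfop_path(links, "a", "d", fbw=2))   # ['a', 'b', 'd']
```

With `fbw=2` both two-hop paths qualify, so the BFS returns the first one found; dropping the wide `a-b-d` links from the feasible set (e.g., `fbw` chosen just above 3) is what steers best-effort traffic away from them in the scheme described above.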
The time complexity of the SFOP algorithm is O(n log n + l), where n is the number of network nodes and l is the number of links. SFOP reduces the overall overhead because, under SDN's logically centralized control, it is executed fewer times than under the distributed control of a traditional network.

4. INSTALLING FORWARDING RULES INTO THE NETWORK ELEMENTS
If no rule on a switch matches an incoming packet, the packet is sent to the controller. The controller extracts the source and destination MAC and IP addresses from the packet and then looks up the locations of the source and destination hosts. If both hosts are located in the same data center, the controller computes an optimal path and installs forwarding rules on the switches along that path.

5. SDN CACHING
Unnecessary packet processing at the SDN controller can be avoided with SDN caching. The SDN cache is implemented using the northbound API. After migration, when a packet is forwarded to the migrated VM, instead of recalculating the entire optimal path, the SDN controller can reuse already cached flows, identified by the destination VM's MAC address. Since each cached flow is identified by its MAC address, the MAC address serves as the key of a hash table whose values are the sets of flow IDs created by the controller. The cached flows themselves are stored in a second hash table, keyed by flow ID, whose values are the flow objects. When a packet arrives at an OpenFlow-enabled switch, a lookup is first performed on the switch's flow table. A flow-table miss results in a lookup of the flow cache; if there is a match in the flow cache, the associated forwarding action is executed; otherwise a packet_in event is sent to the controller, which then inserts an entry into the flow table. In a dynamic network environment, where the controller performs dynamic re-routing based on migration decisions from the Cloud-OS, the controller may change a flow's path. In that case, it sends flow-modification messages to both the flow cache and the switch to keep them updated.
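The two-hash-table cache and the three-level lookup order described above can be sketched as follows. The class and field names are illustrative, and a flow is modeled as a plain dictionary; a real deployment would sit behind a controller's northbound API.

```python
import itertools

# Sketch of the SDN flow cache: one hash table maps a destination MAC
# to the set of flow IDs, a second maps each flow ID to the flow object.
# Names and structures are illustrative assumptions.

class FlowCache:
    def __init__(self):
        self._ids = itertools.count(1)
        self.by_mac = {}   # dst MAC -> set of flow IDs
        self.flows = {}    # flow ID -> flow object (here: a dict)

    def insert(self, dst_mac, flow):
        fid = next(self._ids)
        self.by_mac.setdefault(dst_mac, set()).add(fid)
        self.flows[fid] = flow
        return fid

    def lookup(self, dst_mac):
        fids = self.by_mac.get(dst_mac, set())
        return [self.flows[f] for f in sorted(fids)]

    def modify(self, fid, **changes):
        # Mirrors the flow-mod message the controller sends to the switch
        # after re-routing, so cache and switch stay consistent.
        self.flows[fid].update(changes)

def handle_packet(switch_table, cache, dst_mac):
    """Lookup order: switch flow table, then flow cache, then controller."""
    if dst_mac in switch_table:
        return ("switch", switch_table[dst_mac])
    cached = cache.lookup(dst_mac)
    if cached:
        return ("cache", cached[0]["out_port"])
    return ("controller", None)   # packet_in -> controller computes a path

cache = FlowCache()
fid = cache.insert("00:aa", {"out_port": 3})
print(handle_packet({}, cache, "00:aa"))   # ('cache', 3)
print(handle_packet({}, cache, "00:bb"))   # ('controller', None)
cache.modify(fid, out_port=7)              # e.g. after dynamic re-routing
print(handle_packet({}, cache, "00:aa"))   # ('cache', 7)
```

Only the third case pays the controller round-trip, which is the saving the caching mechanism is after.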
III.B INTER-DATA CENTER MIGRATION:
If the source and destination hosts are not in the same data center, the controller sends a request to the global orchestrator for the public addresses of the source host and the destination host. The orchestrator responds by sending the allocated addresses to the network controllers of the two data centers where the source and destination VMs reside. Both controllers then install address translation rules on the programmable gateway switches and the other switches.
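The translation rules can be sketched as OpenFlow-style match/set-field pairs: one rule rewrites the outbound private source address to the public one, and the reverse rule rewrites the inbound public destination address back. The field names loosely follow OpenFlow set-field actions, and the addresses are made-up examples.

```python
# Illustrative sketch of the address-translation rules the controllers
# install on the gateway switches. Field names loosely follow OpenFlow
# set-field actions; ports and addresses are assumptions.

def make_nat_rules(vm_private_ip, vm_public_ip):
    """Rewrite outbound private -> public, inbound public -> private."""
    outbound = {"match": {"ipv4_src": vm_private_ip},
                "actions": [("set_field", "ipv4_src", vm_public_ip),
                            ("output", "wan_port")]}
    inbound = {"match": {"ipv4_dst": vm_public_ip},
               "actions": [("set_field", "ipv4_dst", vm_private_ip),
                           ("output", "lan_port")]}
    return [outbound, inbound]

def apply_rules(rules, packet):
    """First-match rule application, as a flow table would do it."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            for action in rule["actions"]:
                if action[0] == "set_field":
                    packet[action[1]] = action[2]
            return packet
    return packet  # no match: packet passes through unchanged

rules = make_nat_rules("10.0.0.5", "203.0.113.5")
pkt = apply_rules(rules, {"ipv4_src": "10.0.0.5", "ipv4_dst": "198.51.100.9"})
print(pkt["ipv4_src"])   # 203.0.113.5
```

Because both directions are plain flow rules, the controller can retarget them with ordinary flow-mod messages when the VM moves again.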
Here, network address translation comes into the picture to deal with the subnet location expansion problem, i.e., the migrated VM retains its own IP and MAC addresses even after migration. The difficulty is that a VM is allocated an IP address from one subnet during its instantiation; if the VM is migrated to another subnet, Layer 3 routing will fail unless a global orchestrator with full topology knowledge is able to route this kind of traffic across the subnets.
Since the OpenFlow protocol supports actions that modify packet headers, an OpenFlow-enabled switch in each data center is employed as the gateway to carry out this address translation, also under the control of the controller.

IV. CONSISTENCY BETWEEN THE CONTROLLER AND THE CLOUD-OS
After migrating a VM from one data center to another, both data centers update their routing paths to forward packets destined to the migrated VM. The Cloud-OS and the network controller must coordinate with each other, because the cloud management system owns the VMs' information while the controller also maintains VM location information for traffic forwarding. In a VM migration scenario, if they are not well coordinated, they may hold inconsistent views of a VM's location, leading to misconfigurations.
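One simple way to avoid the inconsistent-view problem is to version-stamp location updates and fan them out through the orchestrator, so a stale or replayed report can never overwrite a newer one. This scheme is an illustrative assumption, not a mechanism taken from the text.

```python
# Sketch of version-stamped VM-location updates, broadcast via the
# orchestrator so the controller and the Cloud-OS converge on the same
# view. The design is an illustrative assumption.

class LocationStore:
    def __init__(self):
        self.loc = {}  # vm -> (version, host)

    def update(self, vm, version, host):
        cur = self.loc.get(vm)
        if cur is None or version > cur[0]:
            self.loc[vm] = (version, host)
            return True
        return False   # stale update, ignored

controller, cloud_os = LocationStore(), LocationStore()

def orchestrator_broadcast(vm, version, host):
    # Both views receive every update; the version check makes delayed
    # duplicates harmless, so the two views cannot drift apart.
    for store in (controller, cloud_os):
        store.update(vm, version, host)

orchestrator_broadcast("vm1", 1, "dc1-host3")
orchestrator_broadcast("vm1", 2, "dc2-host7")   # migration
orchestrator_broadcast("vm1", 1, "dc1-host3")   # delayed duplicate report
print(controller.loc["vm1"])            # (2, 'dc2-host7')
print(controller.loc == cloud_os.loc)   # True
```

The delayed duplicate in the last call is exactly the kind of race that would otherwise leave the controller forwarding traffic to the old host.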
V. CONCLUSION
The features of SDN can be leveraged at different stages of the live VM migration procedure. First, while calculating migration costs, link weights can be obtained directly from the SDN controller, which has complete knowledge of the updated topology. Second, after migration, the optimal path for forwarding packets from source to destination can be calculated with the SFOP algorithm using OpenFlow statistics. Third, after migration, network updates need not be installed into each and every network device; they can instead be maintained only in the SDN controller, which reduces the service downtime. Further reduction in service downtime can be achieved by using the SDN caching mechanism to cut the processing time at the SDN controller. Thus, when the new VM is started in the post-migration stage, fewer operations are required.