Optimized SDN Enabled Live VM Migration - by A Harika

Live VM migration is a technology used for load balancing and for optimizing VM placement in data centers. The process transfers an OS instance, its memory, and its local resources (e.g., storage and network interfaces) to another node without shutting down the VM. However, several problems in this domain remain unaddressed because of the traditional network architecture.


I. CHALLENGES IN LIVE VM MIGRATION:

  • Migration algorithms should dynamically take into account the temporal network load and the end-to-end paths of pairwise VM flows. For this purpose, the entire topology knowledge (network routes, link weights) must be fed into all the network elements after every topology update, which couples the whole system too tightly to a given topology.
  • Also, in traditional architectures, network updates and new forwarding rules must be installed into each and every network element after every migration, limiting network optimization.

II. SDN BASED SOLUTION TO THE ABOVE CHALLENGES:

The main concept of SDN is decoupling the forwarding plane from the control plane. One can leverage the features of SDN to make live VM migration more flexible. The SDN controller can dynamically adapt to changing traffic flows and therefore adjust the network on a short timescale, for example by reassigning network routes and link weights after a topology change. The logical centralization of the control plane also makes it much simpler to query the global network state.

II.A SYSTEM ARCHITECTURE

In the proposed architecture, each data center has a network controller, a Cloud-OS, and a programmable gateway, with a global orchestrator for inter-data-center communication.

    Fig. 1 System Architecture


II.B ROLE OF EACH COMPONENT:
  • SDN CONTROLLER: The main purpose of the network controller is to calculate optimal routing paths and to install the forwarding rules into the network elements (OpenFlow-enabled switches).
  • CLOUD-OS: All VM-related tasks, such as VM management (creation, deletion), are carried out by a Cloud-OS such as OpenStack.
  • PROGRAMMABLE GATEWAY: An OpenFlow-enabled switch used to carry out network address translation for routing packets to other data centers.
  • GLOBAL ORCHESTRATOR: Coordinates the network controllers and cloud management systems across multiple data centers. In particular, the global orchestrator maintains all VMs' location information, helps select the best paths for transferring migration traffic, and controls the network update process.

II.C VM MIGRATION WITHIN THE DATA CENTER:

    1. VM MIGRATION INITIATION

    The entire topology information is maintained within the SDN controller. Whenever a VM
    migration is initiated, the Cloud-OS verifies the migration conditions, such as whether
    the destination server has enough capacity to host the VM. Either by running algorithms
    such as PUSH, PULL, and HYBRID, or by assigning weights to all the network links, it
    calculates the communication costs and hence decides whether the migration is worthwhile.
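For illustration only (the original does not give an exact cost model), the worthwhile-or-not decision can be sketched as comparing the pairwise communication costs before and after the move against a one-time migration overhead; all names here are assumptions:

```python
# Illustrative sketch only: the cost model and names are assumptions,
# not taken from the original write-up.
def communication_cost(vm_pairs, path_weight):
    """Sum of (traffic volume x path weight) over communicating VM pairs.

    vm_pairs: iterable of (vm_a, vm_b, traffic_volume) tuples.
    path_weight: callable giving the current path weight between two VMs,
                 e.g. backed by the SDN controller's link weights.
    """
    return sum(traffic * path_weight(a, b) for a, b, traffic in vm_pairs)

def migration_worthwhile(cost_now, cost_after, migration_overhead):
    # Migrate only if the post-migration cost plus the one-time
    # overhead beats the current communication cost.
    return cost_after + migration_overhead < cost_now
```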

    Instead of obtaining link weights for VM pairs in a static manner by relying on a table
    instantiated at the beginning, SDN can be used to programmatically calculate link weights
    whenever necessary, as the controller maintains a real-time view of the network
    links and their status.

    If a link fails, the controller can compensate by adjusting the weights of other relevant
    links accordingly, so as to avoid problems such as logical over-subscription on other links.
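As a sketch of this idea, link weights can be derived on demand from the controller's live view of the links, so a failure or load change is reflected the next time weights are requested. The Link model and the statistics feeding it are assumptions made for illustration:

```python
# Minimal sketch: weights computed on demand from the controller's
# real-time link view. Link and its fields are illustrative assumptions;
# in practice used_mbps would come from OpenFlow port/flow statistics.
from dataclasses import dataclass

@dataclass
class Link:
    src: str              # source switch ID
    dst: str              # destination switch ID
    capacity_mbps: float
    used_mbps: float      # from OpenFlow statistics
    up: bool = True

def link_weight(link):
    """Weight grows as residual bandwidth shrinks; a failed link is unusable."""
    if not link.up:
        return float("inf")
    residual = max(link.capacity_mbps - link.used_mbps, 1e-6)
    return 1.0 / residual

def recompute_weights(links):
    # Called after every topology change (e.g. a link failure), so paths
    # shift away from loaded links and logical over-subscription of the
    # remaining links is avoided.
    return {(l.src, l.dst): link_weight(l) for l in links}
```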

    2. UPDATING TOPOLOGY

    The cloud management system reports the corresponding configuration to both the
    controller and the global orchestrator simultaneously.

    As soon as the Cloud-OS reports, both of them update their topology information. The
    optimal path for forwarding packets is then computed by the controller, since it has a
    global view of the entire network.

    3. SFOP ALGORITHM FOR OPTIMAL PATH CALCULATION:

    Fine-grained statistics from the OpenFlow interfaces can be used to calculate the optimal path. First, the algorithm computes the widest-bandwidth path. Second, it compares the feasible bandwidth (FBW) with the widest bandwidth found: if the FBW is less than or equal to the widest bandwidth, the FBW is chosen; otherwise, the widest bandwidth itself is used. Third, it finds the shortest path that supports the chosen bandwidth. The algorithm thus computes the optimal path, providing the feasible bandwidth with the minimum hop count. A sketch of these stages follows the complexity discussion below.


    In the meantime, the algorithm also optimizes link bandwidth utilization, because it
    always uses a path that has the feasible bandwidth in the current state of the SDN-WAN,
    leaving the links with the widest bandwidth for best-effort traffic.

    The time complexity of the SFOP algorithm is O(n log n + l), where n is the number of
    network nodes and l is the number of links. SFOP also reduces the overall computational
    load because, under the logically centralized control of SDN, it is executed fewer times
    than under the distributed control of a traditional network.
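Here is a minimal Python sketch of the stages just described, under two assumptions that are mine rather than the original's: the topology is a dict mapping each node to {neighbor: available_bandwidth} built from OpenFlow statistics, and "optimal" means the fewest hops among paths whose bottleneck bandwidth meets the chosen value:

```python
# Sketch of an SFOP-style computation; names and data layout are assumptions.
import heapq
from collections import deque

def widest_bandwidth(graph, src, dst):
    """Stage 1: max-bottleneck (widest-path) variant of Dijkstra."""
    best = {src: float("inf")}
    heap = [(-best[src], src)]
    while heap:
        width, node = heapq.heappop(heap)
        width = -width
        if node == dst:
            return width
        for nbr, bw in graph[node].items():
            cand = min(width, bw)       # bottleneck along this extension
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return 0.0                          # dst unreachable

def sfop_path(graph, src, dst, fbw):
    """Stages 2-3: pick FBW if the widest path can carry it, otherwise the
    widest bandwidth itself; then BFS for the fewest hops over feasible links."""
    widest = widest_bandwidth(graph, src, dst)
    target = fbw if fbw <= widest else widest
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path, target
        for nbr, bw in graph[path[-1]].items():
            if bw >= target and nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None, target

# Example: sfop_path({"a": {"b": 10, "c": 4}, "b": {"d": 8},
#                     "c": {"d": 4}, "d": {}}, "a", "d", fbw=5)
# returns (["a", "b", "d"], 5): the a-c-d route cannot carry 5 units,
# so the feasible a-b-d route is chosen.
```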

    4. INSTALLING FORWARDING RULES INTO THE NETWORK ELEMENTS

    If there is no rule matching an incoming packet on a switch, the packet is sent to the
    controller. The controller extracts the source and destination MAC and IP addresses from
    the packet, and then looks up the locations of the source and the destination hosts.

    If the source and destination hosts are located in the same data center, the controller
    computes an optimal path and installs forwarding rules on switches along the path.
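As one possible realization of this packet-in workflow (the original does not name a controller platform), here is a hedged sketch using the Ryu OpenFlow framework; host_location and compute_path are assumed helpers standing in for the controller's host-tracking and path-computation (e.g. SFOP) modules:

```python
# Sketch only: Ryu is one possible controller choice, and the two
# self.* helpers are assumptions, not Ryu APIs.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ethernet, ipv4

class MigrationRouter(app_manager.RyuApp):

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocol(ethernet.ethernet)
        ip = pkt.get_protocol(ipv4.ipv4)
        if eth is None or ip is None:
            return  # this sketch handles only IPv4 over Ethernet

        # Assumed helpers: where are the source and destination hosts?
        src_loc = self.host_location(eth.src, ip.src)
        dst_loc = self.host_location(eth.dst, ip.dst)

        if src_loc.datacenter == dst_loc.datacenter:
            # Same data center: install forwarding rules along the path.
            for dp, out_port in self.compute_path(src_loc, dst_loc):
                parser = dp.ofproto_parser
                ofp = dp.ofproto
                match = parser.OFPMatch(eth_type=0x0800,
                                        eth_dst=eth.dst, ipv4_dst=ip.dst)
                actions = [parser.OFPActionOutput(out_port)]
                inst = [parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS, actions)]
                dp.send_msg(parser.OFPFlowMod(datapath=dp, match=match,
                                              instructions=inst))
```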

    5. SDN-CACHING

    Unnecessary packet processing at the SDN controller can be avoided by the concept
    of SDN caching. The SDN cache is implemented using the Northbound API.

                         
    Fig. 2 SDN Cache as Northbound Application

    After migration, if a packet is to be forwarded to the migrated VM, instead of
    calculating the entire optimal path the SDN controller can make use of the already
    cached flows, based on the MAC address of the destination VM. Each cached flow is
    identified by this MAC address, which can therefore be used as the key of a hash table.

                                                         
    Fig. 3 Flow cache table

    The first hash table contains the sets of flow IDs created by the controller. The
    cached flows themselves are stored in another hash table, whose key is the flow ID and
    whose value is a flow object. When a packet arrives at an OpenFlow-enabled switch, a
    lookup is performed first on the switch's flow table. A flow-table miss results in a
    lookup of the flow cache; if there is a match in the flow cache, the associated
    packet-forwarding action is executed; otherwise, a packet_in event is sent to the
    controller, leading to the insertion of an entry in the flow table by the SDN
    controller.


    In a dynamic network environment, where the controller performs dynamic re-routing
    based on migration decisions from the Cloud-OS, the controller may change the path of
    a flow. In that scenario, the controller sends flow modification messages to both the
    flow cache and the switch to update them.
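A dict-based sketch of the two hash tables and the update path described above; the class and method names are illustrative, not taken from the original:

```python
# Sketch of the flow cache: dst MAC -> flow IDs, flow ID -> cached flow.
class FlowCache:
    def __init__(self):
        self.ids_by_mac = {}   # destination MAC -> set of flow IDs
        self.flow_by_id = {}   # flow ID -> cached flow (match + actions)

    def insert(self, dst_mac, flow_id, flow):
        self.ids_by_mac.setdefault(dst_mac, set()).add(flow_id)
        self.flow_by_id[flow_id] = flow

    def lookup(self, dst_mac):
        """Consulted on a switch flow-table miss; a hit skips the
        controller's full optimal-path computation."""
        ids = self.ids_by_mac.get(dst_mac, set())
        return [self.flow_by_id[i] for i in ids]

    def update(self, flow_id, new_flow):
        """Applied when the controller re-routes a flow; the matching
        flow-modification message is also sent to the switch."""
        if flow_id in self.flow_by_id:
            self.flow_by_id[flow_id] = new_flow
```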

II.D INTER-DATA CENTER MIGRATION:
    If the source and destination hosts are not in the same data center, the controller
    sends a request to the global orchestrator for the public addresses of the source and
    destination hosts. The orchestrator responds by sending the allocated addresses to the
    network controllers of both data centers in which the source and the destination VMs
    are located. Both controllers then install address translation rules on the
    programmable gateway switches and other switches.

    Here, network address translation comes into the picture to deal with the subnet
    location expansion problem: the migrated VM retains its own IP and MAC address even
    after migration. The difficulty is that a VM is allocated an IP address from one subnet
    during its instantiation; if the VM is migrated to another subnet, Layer 3 routing will
    fail unless a global orchestrator with knowledge of the entire topology is available to
    route this kind of traffic across the subnets.

    As the OpenFlow protocol supports actions that modify packet headers, we employ an
    OpenFlow-enabled switch in each data center as the gateway to carry out address
    translation, which is also under the control of the controller.
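A minimal sketch of such a header-rewriting gateway rule, again using Ryu's OpenFlow 1.3 parser as one possible realization; the function name, addresses, and port are illustrative placeholders:

```python
# Sketch: rewrite the orchestrator-allocated public destination address
# back to the VM's retained private address at the gateway switch.
def install_nat_rule(gw_dp, vm_public_ip, vm_private_ip, out_port):
    parser = gw_dp.ofproto_parser
    ofp = gw_dp.ofproto
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=vm_public_ip)
    actions = [
        parser.OFPActionSetField(ipv4_dst=vm_private_ip),  # translation
        parser.OFPActionOutput(out_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    gw_dp.send_msg(parser.OFPFlowMod(datapath=gw_dp, match=match,
                                     instructions=inst))

# The reverse rule (rewriting the private source address to the public
# one on egress) would mirror this with ipv4_src fields.
```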

III. CONSISTENCY AMONG THE CONTROLLER & CLOUD-OS
    After migrating a VM from one data center to another, both data centers update their
    routing paths to forward the packets destined for the migrated VM. The Cloud-OS and
    the network controller should coordinate with each other, because the cloud management
    system owns the VMs' information while the controller also maintains the VMs' location
    information for traffic forwarding. Under the scenario of VM migration, if they are
    not well coordinated, they may hold inconsistent views of a VM's location, which leads
    to misconfigurations.
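One simple way to avoid such inconsistencies, offered here only as an illustrative sketch rather than the mechanism of the original design, is to version each VM's location record so that a stale report is ignored by whichever component receives it last:

```python
# Illustrative only: versioned VM-location records shared in spirit by
# the Cloud-OS, the controller, and the global orchestrator.
class VmLocationStore:
    def __init__(self):
        self.records = {}  # vm_id -> (version, datacenter, host)

    def update(self, vm_id, version, datacenter, host):
        current = self.records.get(vm_id)
        if current is None or version > current[0]:
            self.records[vm_id] = (version, datacenter, host)
            return True   # accepted: forwarding rules may be updated
        return False      # stale report: the newer view is kept
```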


IV. CONCLUSION

    The features of SDN can be leveraged at different stages of the live VM migration
    procedure. First, while calculating the migration costs, link weights can be obtained
    directly from the SDN controller, which has complete knowledge of the updated topology.
    Second, after migration, the optimal path for forwarding packets from the source to the
    destination can be calculated by the SFOP algorithm using OpenFlow statistics. Third,
    after migration, network updates need not be installed into each and every network
    device; rather, they can be maintained only in the SDN controller, which reduces the
    service downtime. Further reduction in the service downtime can be achieved by using
    the SDN caching mechanism to reduce the processing time at the SDN controller. Thus,
    when the new VM is started in the post-migration stage, fewer operations are needed to
    forward the related packets to the new location.

