Today I spent the day at the Tellabs PoC (Proof of Concept) lab, where they showcased their DWDM and mobile backhaul portfolio. Tellabs had put up an impressive collection of equipment, including the 7100 Nano, the 7300 Carrier Ethernet series and the 8600 series for mobile backhaul. I was particularly interested to see their backhaul equipment in action. Tellabs is one of the vendors that advocates IP/MPLS for mobile backhaul, and they proudly boast of making IP/MPLS easy for backhaul people to work with. As a technology, I do consider MPLS quite complex for transport people to handle in backhaul: they need something easy to work with, without the complexities of IP routing, addressing and signaling protocols. I had written in an earlier post that the TCO of MPLS is quite high compared to MPLS-TP. The post is here
However, Tellabs has come up with tools to handle the complexities of IP RAN backhaul.
The tool that makes MPLS easy to work with is Tellabs' NMS, called INM (Intelligent Network Manager). I have seen a couple of other vendors doing MPLS for LTE backhaul, but they do it with CLI scripts, which makes configuring and provisioning quite cumbersome. INM does everything, including configuration, provisioning, troubleshooting and reporting, from a central place with an easy-to-use graphical interface. There is a cell site wizard that automates cell site provisioning and pseudowire connections. Impressively, there is also an intelligent troubleshooting wizard that can be called upon to suggest solutions if one runs into issues while configuring and provisioning services.
Indeed, Tellabs has made quite an investment in making its NMS tool user friendly. A comparison with MPLS-TP would not be fair here, since MPLS-TP was designed from the beginning to be entirely NMS driven, minus the complexities of a control plane. The INM tool does, however, make dealing with IP routing and the control plane much easier for people who come from a transport background.
There is some confusion surrounding the differences between SDN and NFV. Both terms relate to network virtualization, and both are hot topics these days; sometimes one concept is mixed up with the other.
SDN stands for Software Defined Networking. It rests on two pillars:
- Separation of the data and control planes, and centralization of the control plane.
- Programming the network through open interfaces.
The SDN concept came into being because researchers were frustrated with changing the software in network elements every time they wanted to try something new to study network behavior. They thought: why not program the network elements and manage them from a central place? The protocol that brings this programmability to the network is called OpenFlow.
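The two pillars can be illustrated with a toy sketch: a centralized controller holds the policy and pushes match-action rules down into switches, which only forward. All names below (Controller, Switch, add_flow) are invented for illustration, assuming an OpenFlow-like model; a real deployment would exchange OpenFlow messages rather than Python calls.

```python
# Toy model of SDN's split: switches forward, the controller decides.

class Switch:
    """Data plane only: forwards by looking up installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}               # match (dst IP) -> action (out port)

    def forward(self, dst):
        # Packets with no matching rule get punted to the controller,
        # mirroring OpenFlow's table-miss behavior.
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    """Control plane, centralized: computes and pushes rules."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def add_flow(self, switch, dst, out_port):
        # The "programmable network": policy is decided here, not on the box.
        switch.flow_table[dst] = out_port


ctrl = Controller()
sw = Switch("edge-1")
ctrl.register(sw)

print(sw.forward("10.0.0.5"))              # no rule yet -> send-to-controller
ctrl.add_flow(sw, "10.0.0.5", "port-2")
print(sw.forward("10.0.0.5"))              # rule installed -> port-2
```

The point of the sketch is only the separation: the `Switch` contains no routing logic at all, so changing network behavior means reprogramming the controller, not touching the boxes.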
NFV (Network Functions Virtualization), on the other hand, is about virtualizing network elements on commodity servers so as to reduce an operator's inventory, power and space requirements, and hence reduce CAPEX and OPEX. NFV is driven by operators; the concept was originally put forward by a group of service providers in a white paper.
So what is the difference between the two?
It is commonly understood that SDN and NFV are totally independent of each other. This is only partially true. SDN concepts in particular draw on NFV: while separation of the control and forwarding functions and programmability are the core ideas of SDN, they cannot be fully realized unless network elements become commoditized and their functions virtualized, which is the core idea of NFV. NFV, on the other hand, can exist without SDN, but even with commoditized network elements the network will not be efficient without the programmability that SDN brings. SDN and NFV thus complement each other, and implemented together they can make the network flexible, virtual and commoditized.
The research firm Heavy Reading defines P-OTS as "a platform that combines SONET/SDH, connection-oriented Ethernet, DWDM and, depending on where the platform is used within the network, also optical transport network (OTN) switching and reconfigurable optical add-drop multiplexers (ROADMs)".
As per Heavy Reading, there is a growing trend of operators asking about IP/MPLS capabilities in P-OTS platforms. Transport groups inside operators have expressed interest in adding Layer 3 functionality to P-OTS platforms; operators favor integrating L0 to L3 in one platform that can meet all their needs. This new requirement is addressed in P-OTS 2.0. According to Heavy Reading, there are four features that differentiate P-OTS 2.0 from legacy P-OTS:
- There is a change of focus from TDM to packet functions.
- Pure packet implementations of P-OTS are ramping up.
- 100G is seeing more applications in the metro area.
- Switched OTN has entered the metro area, removing the need for an SDH fabric in new network elements.
Looking at what is on the market today, the P-OTS platforms leave a lot to be desired. It would be hard to find a product optimized for all layers: legacy P-OTS platforms are optimized for certain applications but not for all. Depending on where the vendor is coming from, some P-OTS platforms are optimized for TDM applications but not for Ethernet; some are strong in DWDM but not in packet and TDM. Some vendors position their platforms to carry Ethernet over OTN, while others vouch that carrying Ethernet in its native form is better and keep OTN as an option. P-OTS has thus become more of a marketing term than a platform that can address everything desired of it.
P-OTS 2.0, therefore, should address all layers (Layer 0 to Layer 3) effectively and in an optimized way. Operators would not like platforms that, like legacy P-OTS, are geared toward a few applications while leaving the others as merely "supported on the platform". P-OTS 2.0 should be modular, preferably offering a family of platforms for applications from metro to core, with cards interchangeable between platforms. The platforms should support IP/MPLS, but above all GMPLS, across Layer 0 to Layer 3. It should be possible to run all layers independently or tightly and seamlessly together, and the layers should communicate with one another, for example for fault management. These are the needs of operators today, and vendors ought to rise to the occasion.
The API is the most powerful feature of SDN, the programmable network. It will open up a world of opportunities, in terms of flexibility and features, for both operators and app developers. Current networks are rigid and lock operators in to vendors. SDN will help avoid this lock-in by making the network a flexible platform that responds as operators desire. This will reduce the OPEX that comes from manual provisioning of services and will foster innovation, because operators can develop new applications easily and flexibly.
SDN focuses on the network as a service, or what cloud computing terms "NaaS". With NaaS, operators have control over the bandwidth, routing and QoS of their data and can offer differentiated services. With SDN, operators can leverage existing NaaS initiatives and build up their SDN infrastructure from there. The knowledge and skills gained with NaaS today will prove very useful in developing an SDN ecosystem of apps, and the experience of these operators can help newcomers to SDN learn from them.
In the cloud lie opportunities for carriers. Carrier cloud has all the ingredients to deliver what the traditional cloud has not been able to achieve. So what makes carrier cloud different from traditional public and enterprise clouds?
First, a few words about public and enterprise clouds. Public clouds are designed on the concept of "scale out": as applications and users grow, more processing power and storage is needed, which means adding more servers to the cloud. These are commodity servers, so building and expanding the network this way is neither expensive nor time consuming. Resilience is always brought in by software, not hardware; in fact the hardware is even dubbed "designed for failure". The enterprise cloud, on the other hand, is built on the concept of "scale up": processors and storage capacity in the cloud are upgraded rather than added, as in public clouds. Since many enterprises use legacy software that is not designed for the "scale out" model of public clouds, such enterprises opt for an enterprise cloud, which is the least disruptive and needs no forklift upgrades. Resilience here is based on hardware rather than software.