Wednesday, December 7, 2011

ForCES (Forwarding and Control Element Separation) and OpenFlow 1.1 - Contrasting them

At a very high level, both the ForCES and OpenFlow protocols separate the control plane from the data plane. Both protocols are intended to drive Software Defined Networking.

Some terminology differences:

  • Software Driven Networking versus Software Defined Networking: Both mean the same thing, but different terms are used. ForCES uses Software Driven Networking; OpenFlow was created in the context of Software Defined Networking.
  • ForCES uses the terms Control Element and Forwarding Element. OpenFlow tends to use the terms Controller (which contains the control plane) and Switch or Datapath for the forwarding element. Some people also call the datapath the fastpath or the data plane.

Though at a high level both ForCES and OpenFlow can be considered part of Software Defined Networking, there is no need for two different protocols. Eventually they will need to be consolidated into one, or one of them will die. I think the two protocol definitions came into existence due to conceptual differences over how functionality is separated between the "Control Plane" and the "Data Plane".

I think it may be difficult to bridge the conceptual differences, but I believe there are some good things in the ForCES protocol that could be adopted into OpenFlow to make the OpenFlow protocol more complete.

First, my view of the conceptual differences between the ForCES and OpenFlow protocols:

ForCES expects the datapath to have a set of LFBs (Logical Functional Blocks); that is, one Forwarding Element can contain multiple LFBs. Each LFB is defined by its inputs, its outputs, and the different components within it that can be configured by the Control Element. Components within an LFB might include multiple flow tables, configuration information, etc. Each LFB class is identified by a Class ID. Since there can be multiple instances of the same LFB class, a unique LFB instance is identified by the LFB Class ID plus an instance ID. The ForCES suite of protocols is going to define several standard LFBs. As long as vendors develop LFBs per the standards, a CE will be able to work with datapaths from different vendors.

The ForCES CE mainly connects several LFBs into a packet-flow topology to achieve the needed functionality. The CE also programs each LFB, creating flows and so on. ForCES defines LFBs to output metadata, and subsequent LFBs can make use of that metadata in addition to their other inputs.

The main point above is that ForCES expects the datapath to expose logical functional units.
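The LFB model described above can be sketched roughly as follows. The class names, components, and metadata keys here are hypothetical illustrations, not standard ForCES LFB definitions; the point is just that each LFB is addressed by (Class ID, instance ID), exposes CE-programmable components, and passes metadata to the next LFB in the topology:

```python
# Toy model of the ForCES LFB idea: LFBs chained by the CE, each stage
# emitting metadata that downstream stages can consume.

class LFB:
    def __init__(self, class_id, instance_id):
        self.class_id = class_id        # identifies the (standard) LFB class
        self.instance_id = instance_id  # distinguishes instances of one class
        self.components = {}            # tables/config the CE can program

    def process(self, packet, metadata):
        """Return (packet, metadata) for the next LFB in the topology."""
        return packet, metadata

class Classifier(LFB):
    def process(self, packet, metadata):
        # Emit metadata (e.g. an ingress-port tag) for downstream LFBs.
        metadata["in_port"] = packet.get("port", 0)
        return packet, metadata

class Forwarder(LFB):
    def process(self, packet, metadata):
        # Use a CE-programmed component plus upstream metadata to pick output.
        table = self.components.get("fib", {})
        packet["out_port"] = table.get(metadata.get("in_port"), -1)
        return packet, metadata

def run_topology(lfbs, packet):
    """The CE-built LFB chain: each stage sees the previous stage's metadata."""
    metadata = {}
    for lfb in lfbs:
        packet, metadata = lfb.process(packet, metadata)
    return packet

fwd = Forwarder(class_id=2, instance_id=1)
fwd.components["fib"] = {1: 7}          # the CE programs the component
pkt = run_topology([Classifier(1, 1), fwd], {"port": 1})
```

Note how the CE only wires instances together and fills in components; the per-LFB processing logic lives entirely in the datapath.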

OpenFlow differs conceptually on this point. The OpenFlow protocol does not expect any LFBs. OpenFlow expects the datapath to have several tables, and the controller creates flows in those tables with different actions.

It appears that OpenFlow expects an even lower-level implementation from the datapath. LFB-style functionality is expected to be done by the controller. The controller has the flexibility to define logical functional units itself by programming tables and flows. That is, the controller might divide the tables into multiple logical units, and each set of tables with its flows and associated actions amounts to what ForCES calls an LFB.

So, in essence, OpenFlow-based SDN expects very low-level support from datapaths.
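A minimal sketch of this lower-level model, with hypothetical field names: the datapath holds only generic tables of (match, priority, actions) entries, and any higher-level structure exists only in how the controller chooses to program them:

```python
# Toy match-action table in the OpenFlow style: the switch knows nothing
# about the purpose of a table; the controller gives tables their meaning.

def install_flow(table, match, priority, actions):
    table.append({"match": match, "priority": priority, "actions": actions})
    table.sort(key=lambda e: -e["priority"])   # highest priority first

def lookup(table, packet):
    for entry in table:
        # An entry matches if every specified field equals the packet's field.
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return []                                  # table miss

table0 = []
install_flow(table0, {"in_port": 1, "eth_type": 0x0800}, priority=10,
             actions=[("output", 2)])
install_flow(table0, {}, priority=0, actions=[("drop",)])   # catch-all

acts = lookup(table0, {"in_port": 1, "eth_type": 0x0800})
```

A "router LFB" or "firewall LFB" in this world is nothing more than a convention the controller imposes on a group of such tables.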

Having said that, I feel there will be a requirement to define more and more actions in the OpenFlow protocol. To extend the functionality of the datapath, ForCES expects to create more LFB specifications; in the case of OpenFlow, I expect more actions to be defined in the protocol in the future.

Some Pros and Cons:

In OpenFlow, tables can be used for whatever purpose the controller chooses, and the purpose of a table can easily be changed by the controller based on network deployment requirements. In ForCES, if a particular deployment does not require certain LFBs, the resources occupied by those LFBs may not be usable by other LFBs, so there is a possibility of the controller underutilizing datapath resources.

But ForCES gives structure to the datapath, which can come in handy for modularity, debugging, and maintainability. Maybe there is something to be learned from the ForCES conceptual model.

One lesson OpenFlow can take from ForCES is to have some sort of action components. The controller should be able to discover which types of actions the datapath supports. Also, it would be great if datapaths had some programmability by which more actions could be uploaded from the controller without requiring new datapath hardware revisions. Of course, this requires a common language to represent actions so as to keep the controller decoupled from the datapath; datapaths would need to understand this language and program themselves.

Some good things in the ForCES protocol that could be adopted in some fashion into future OpenFlow protocol specifications:
  • 2PC Commit Protocol: This is quite powerful. At my old job, we did exactly the same thing for a similar problem. I am sure there are many instances where a controller needs to create several flows in different tables atomically, following the ACID (Atomicity, Consistency, Isolation, Durability) properties. In addition, I think there will also be a need for ACID support across multiple datapaths, where the controller creates flows across multiple datapaths/switches. That is where a 2PC commit protocol is going to help.
  • SCTP transport rather than TCP transport: I personally prefer SCTP over TCP for message-based protocols. SCTP is as reliable as TCP and maintains message boundaries. It is true that SSL over SCTP is not that popular, but security can be provided by other means (IPsec). SCTP also makes it easy to provide batching and command pipelining.
  • Extensibility: ForCES follows a TLV approach in messages between the CE and FE. The nice thing is that TLVs can be nested, giving an XML-like nested structure in binary form. That is very extensible, allowing future additions without major revisions to the protocol or to datapath implementations. One can argue that the OpenFlow binary protocol takes less bandwidth; that is true. I wonder whether we could follow a hybrid approach: known items in a fixed header and unknown/future items in TLV format.
  • Selected fields per table: OpenFlow today does not have any granularity on a per-table basis; it expects flows in every table to be matched on all 15 fields. I wish a newer OpenFlow protocol would specify the fields that are relevant for each table, so that the datapath implementation can allocate appropriately sized flow blocks and utilize memory more effectively.
  • Table Type: OpenFlow specifies that flows have a priority, which gives the impression that tables maintain flows in order. There are several types of functions that don't need an ordered list, such as routing tables, which can use tries. Some tables can be exact-match tables (hash tables can be used there). In ForCES this issue does not arise, since each LFB can implement its own tables in its own fashion as long as the external behavior is the same. In OpenFlow, LFBs are logically part of the controller, and the controller sees only the tables. I would like to see the OpenFlow table definition include an "ACL table" (similar to what OpenFlow 1.1 defines today), an "LPM table", and an "Exact Match table".
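The 2PC commit idea from the first bullet can be sketched as follows; this is a toy model, not the ForCES wire protocol. The controller asks every switch to prepare the flow installs, and commits only if all of them vote yes, so the flows either appear on all switches or on none:

```python
# Two-phase commit sketch for atomic flow installation across switches.
# Switch and two_phase_install are hypothetical illustration names.

class Switch:
    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = []
        self.pending = None

    def prepare(self, flows):
        # Vote "no" if the switch cannot guarantee a later commit.
        if len(self.flows) + len(flows) > self.capacity:
            return False
        self.pending = flows
        return True

    def commit(self):
        self.flows.extend(self.pending)
        self.pending = None

    def abort(self):
        self.pending = None

def two_phase_install(switches, flows):
    prepared = []
    for sw in switches:                 # phase 1: prepare
        if sw.prepare(flows):
            prepared.append(sw)
        else:                           # any "no" vote aborts everyone
            for p in prepared:
                p.abort()
            return False
    for sw in switches:                 # phase 2: commit
        sw.commit()
    return True
```

The atomicity here is what matters: a half-installed path across switches would blackhole traffic, and 2PC is the standard way to avoid that.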
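The nested-TLV extensibility idea can be illustrated with a small encoder/decoder. The 16-bit type and length fields here are illustrative choices, not the actual ForCES TLV layout; the point is that a value can itself contain TLVs, giving the XML-like nesting in binary form:

```python
# Minimal nested TLV encode/decode sketch (type and length are 16-bit,
# network byte order; real protocols may differ in sizes and alignment).
import struct

def encode_tlv(tlv_type, value):
    return struct.pack("!HH", tlv_type, len(value)) + value

def decode_tlvs(buf):
    tlvs, off = [], 0
    while off < len(buf):
        t, length = struct.unpack_from("!HH", buf, off)
        tlvs.append((t, buf[off + 4:off + 4 + length]))
        off += 4 + length
    return tlvs

inner = encode_tlv(2, b"eth0")
outer = encode_tlv(1, inner)             # a TLV whose value is another TLV
[(t, v)] = decode_tlvs(outer)
[(t2, v2)] = decode_tlvs(v)              # recurse into the nested layer
```

An old decoder that doesn't recognize type 1 can still skip it using the length field, which is exactly the forward-compatibility property argued for above.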
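The table-type point can be made concrete with two toy tables: an exact-match table backed by a hash table, and an LPM table with longest-prefix semantics (a linear scan stands in for a real trie here; class names are hypothetical):

```python
# Two table types with different lookup semantics: neither needs the
# ordered, priority-scanned behavior of an ACL-style table.
import ipaddress

class ExactMatchTable:
    def __init__(self):
        self.entries = {}                # hash table: key tuple -> action

    def add(self, key, action):
        self.entries[key] = action

    def lookup(self, key):
        return self.entries.get(key)

class LpmTable:
    def __init__(self):
        self.routes = []                 # (network, action); a trie in practice

    def add(self, prefix, action):
        self.routes.append((ipaddress.ip_network(prefix), action))

    def lookup(self, addr):
        ip = ipaddress.ip_address(addr)
        best = None
        for net, action in self.routes:
            # Longest matching prefix wins, regardless of insertion order.
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, action)
        return best[1] if best else None

lpm = LpmTable()
lpm.add("10.0.0.0/8", "port1")
lpm.add("10.1.0.0/16", "port2")          # more specific route wins
```

If the protocol told the datapath which semantics a table needs, the implementation could pick the hash table or trie instead of a generic ordered structure.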

1 comment:

chesteve said...

Nice post!

You raised good points on the OpenFlow vs ForCES approaches.

There is some recent discussion going on in the IETF; note, however, that IMO the comparison may not be fully accurate (yet):
Maybe you can contribute to the draft!

Also, your latest post on Java and the call for modularity to address complexity is valuable!

With regard to the evolution of the OpenFlow spec, there is some good news on the horizon:
- Version 1.2 (to be released in a matter of days) now includes extensibility with TLV-based data structures.
- Work towards 2.0 has also started and touches on your concerns about typed tables, efficient data structures for the datapath, pipeline modeling, and so on. See a recent presentation: