Wednesday, December 23, 2015

Nice tutorial presentation on Openstack Horizon with AngularJS

Good textual information here:

Pasting the relevant text here:

One of the main areas of focus for Horizon has been around better user experience. Horizon has been moving towards AngularJS as a client-side architecture in order to provide better, faster feedback to users, decrease the amount of data loading on each request, and push more of the UI onto the client side.
For the Kilo release, the team is working on a reference implementation to dictate how AngularJS will be implemented in Horizon.
Improved Table Experience
Currently, filtering and pagination within Horizon are inconsistent. For Kilo, the table experience will be improved by moving much of it to the client side, caching more data on the client and providing pagination and filtering in the Horizon code instead of relying on the APIs to do it.
David hopes this results in a more consistent user experience, and takes the guesswork out of understanding the paradigm on each page.
There is a wizard in Horizon currently, but it's primitive and it has remained largely stagnant through Havana and Icehouse. David said he plans to refocus on that in this release because it's one of the biggest usability issues Horizon faces -- the launch instance workflow is confusing and requires a lot of upfront knowledge.
The Horizon team has been working with the OpenStack UX team to design a better workflow that will be implemented on the client side. The team will use the network wizard as well to make sure they have an extensible format and an extensible widget.
Refine plugin support
It’s important that Horizon has a clean plug-in mechanism that doesn’t involve editing horizon source code.
Right now it requires a user to add files into the directory where Horizon gets deployed; this isn't optimal because it causes problems when a user wants to update a package. The Kilo version of Horizon will have refined plug-in support that allows users to point to where the plug-in mechanism is and provide better Angular support.
Better theming
One concern that operators have is that they don't want to ship Horizon with the same UI as everyone else, especially if they’re putting it in front of customers. David said there's a need to be able to theme it better without having to hack the code.
The team plans to continue development and provide an improved theming mechanism that's easy to work with.

Friday, December 4, 2015

Breaking Diffie Hellman security with massive computing

The following blog post and paper discuss one possible method the NSA may have used to read IPsec- and SSL-encrypted data.

And detailed paper here:

Both IPsec and SSL/TLS use the DH algorithm to establish a shared secret without sending the actual secret. Please see my previous post on this.

I don't understand all the mathematical details presented in the paper, but the important thing to note is that in IPsec and SSL, the DH prime and base numbers are not secret; they are sent in the clear between the parties. The primes are even published: for example, RFC 3526 defines the prime numbers for the various DH groups used with IPsec.

According to the paper:

"The current best technique for attacking Diffie-Hellman relies on compromising one of the private exponents (a, b) by computing the discrete log of the corresponding public value (g a mod p, g b mod p).

Once a private exponent is known via massive compute power, getting hold of the shared key is not a problem, as computing the shared secret from it is a well-known technique.

My initial reading was that a large amount of computational power is required to recover the private exponents for every DH operation. Apparently, that is not true: part of the intermediate computational results can be reused, because the most expensive work depends only on the prime itself. An attacker thus needs to invest in massive computational power only once; after that, further DH exchanges can be broken with little compute power, as long as the same prime is used in the new exchanges.

Now that this news is out, this may be attempted by malicious entities too. That is actually more concerning.

What are alternative solutions?

- Use the elliptic-curve version of DH (ECDH).
- Create a new prime for each tunnel and exchange the prime number.
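The second option, a fresh prime per tunnel, is computationally feasible. Here is a rough sketch using a Miller-Rabin primality test; the bit sizes and round count are illustrative choices of mine, and a real deployment would additionally want safe primes and careful parameter validation:

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2       # random witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                        # composite witness found
    return True

def fresh_prime(bits=512):
    """Generate a fresh random probable prime for a single DH tunnel."""
    while True:
        # force the top bit (exact size) and the low bit (odd)
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

p = fresh_prime(256)   # toy size; real DH would use 2048+ bits
print(p.bit_length())  # -> 256
```

The trade-off is that prime generation adds per-tunnel setup cost, which is why fixed published primes were attractive in the first place.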

Any other solutions?


Wednesday, December 2, 2015

Diffie Hellman - So well Explained here

See this link:

Pasting relevant text here:


Every cipher we have worked with up to this point has been what is called a symmetric key cipher, in that the key with which you encipher a plaintext message is the same as the key with which you decipher a ciphertext message. As we have discussed from time to time, this leads to several problems. One of these is that, somehow, two people who want to use such a system must privately and secretly agree on a secret key. This is quite difficult if they are a long distance apart (it requires either a trusted courier or an expensive trip), and is wholly impractical if there is a whole network of people (for example, an army) who need to communicate. Even the sophisticated Enigma machine required secret keys. In fact, it was exactly the key distribution problem that led to the initial successful attacks on the Enigma machine.
However, in the late 1970's, several people came up with a remarkable new way to solve the key-distribution problem. This allows two people to publicly exchange information that leads to a shared secret without anyone else being able to figure out the secret. The Diffie-Hellman key exchange is based on some math that you may not have seen before. Thus, before we get to the code, we discuss the necessary mathematical background.

Prime Numbers and Modular Arithmetic

Recall that a prime number is an integer (a whole number) that has as its only factors 1 and itself (for example, 2, 17, 23, and 127 are prime). We'll be working a lot with prime numbers, since they have some special properties associated with them.
Modular arithmetic is basically doing addition (and other operations) not on a line, as you usually do, but on a circle -- the values "wrap around", always staying less than a fixed number called the modulus.
To find, for example, 39 modulo 7, you simply calculate 39/7 (= 5 4/7) and take the remainder. In this case, 7 divides into 39 with a remainder of 4. Thus, 39 modulo 7 = 4. Note that the remainder (when dividing by 7) is always less than 7. Thus, the values "wrap around," as you can see below:
0 mod 7 = 0    6 mod 7 = 6
1 mod 7 = 1    7 mod 7 = 0
2 mod 7 = 2    8 mod 7 = 1
3 mod 7 = 3    9 mod 7 = 2
4 mod 7 = 4    10 mod 7 = 3
5 mod 7 = 5
To do modular addition, you first add the two numbers normally, then divide by the modulus and take the remainder. Thus, (17+20) mod 7 = (37) mod 7 = 2.
Modular arithmetic is not unfamiliar to you; you've used it before when you want to calculate, for example, when you would have to get up in the morning if you want to get a certain number of hours of sleep. Say you're planning to go to bed at 10 PM and want to get 8 hours of sleep. To figure out when to set your alarm for, you count, starting at 10, the hours until midnight (in this case, two). At midnight (12), you reset to zero (you "wrap around" to 0) and keep counting until your total is 8. The result is 6 AM. What you just did is to solve (10+8) mod 12. As long as you don't want to sleep for more than 12 hours, you'll get the right answer using this technique. What happens if you slept more than 12 hours?
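The clock example above maps directly onto the `%` (modulo) operator; a quick sketch in Python:

```python
# Modular addition: (a + b) mod m, illustrated with the alarm-clock example.
def mod_add(a, b, m):
    return (a + b) % m

# Going to bed at 10 and sleeping 8 hours on a 12-hour clock:
print(mod_add(10, 8, 12))   # -> 6, i.e. 6 AM

# Sleeping more than 12 hours simply wraps around again:
print(mod_add(10, 14, 12))  # -> 0, i.e. 12 o'clock

# The earlier example from the text: (17 + 20) mod 7
print(mod_add(17, 20, 7))   # -> 2
```

This also answers the closing question: sleeping more than 12 hours just wraps the clock around a second time.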


Here are some exercises for you to practice modular arithmetic on.
  1. 12+18 (mod 9)
  2. 3*7 (mod 11)
  3. (103 (mod 17)) * (42 (mod 17)) (mod 17)
  4. 103*42 (mod 17)
  5. 7^2 (mod 13)
  6. 7^3 (mod 13)
  7. 7^4 (mod 13)
  8. 7^5 (mod 13)
  9. 7^6 (mod 13)
Did you notice something funny about the last 5 exercises? While, usually, when we take powers of numbers, the answer gets systematically bigger and bigger, using modular arithmetic has the effect of scrambling the answers. This is, as you may guess, useful for cryptography!
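Exercises 5 through 9 can be checked in a couple of lines; note how the residues jump around rather than growing steadily:

```python
# Successive powers of 7 reduced mod 13 (exercises 5-9). Unlike ordinary
# powers, which grow monotonically, the modular results look scrambled.
for e in range(2, 7):
    print(f"7^{e} mod 13 = {pow(7, e, 13)}")
```

Python's three-argument `pow(base, exp, mod)` performs the reduction as it goes, which is exactly the repeated-squaring idea discussed later in this post.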

Diffie-Hellman Key Exchange

The premise of the Diffie-Hellman key exchange is that two people, Alice and Bob, want to come up with a shared secret number. However, they're limited to using an insecure telephone line that their adversary, Eve (an eavesdropper), is sure to be listening to. Alice and Bob may use this secret number as their key to a Vigenere cipher, or as their key to some other cipher. If Eve gets the key, then she'll be able to read all of Alice and Bob's correspondence effortlessly. So, what are Alice and Bob to do? The amazing thing is that, using prime numbers and modular arithmetic, Alice and Bob can share their secret, right under Eve's nose! Here's how the key exchange works.
  1. Alice and Bob agree, publicly, on a prime number P, and a base number N. Eve will know these two numbers, and it won't matter!
  2. Alice chooses a number A, which we'll call her "secret exponent." She keeps A secret from everyone, including Bob. Bob, likewise, chooses his "secret exponent" B, which he keeps secret from everyone, including Alice (for subtle reasons, both A and B should be relatively prime to N; that is, A should have no common factors with N, and neither should B).
  3. Then, Alice computes the number
    J = N^A (mod P)
    and sends J to Bob. Similarly, Bob computes the number
    K = N^B (mod P)
    and sends K to Alice. Note that Eve now has both J and K in her possession.
  4. The final mathematical trick is that Alice now takes K, the number she got from Bob, and computes
    K^A (mod P).
    Bob does the same step in his own way, computing

    J^B (mod P).
    The number they get is the same! Why is this so? Well, remember that K = N^B (mod P) and Alice computed K^A (mod P) = (N^B)^A (mod P) = N^BA (mod P). Also, Bob used J = N^A (mod P), and computed J^B (mod P) = (N^A)^B (mod P) = N^AB (mod P).
    Thus, without ever knowing Bob's secret exponent, B, Alice was able to compute N^AB (mod P). With this number as a key, Alice and Bob can now start communicating privately using some other cipher.
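The four steps above can be sketched with toy numbers; P = 23, N = 5, and the secret exponents are illustrative choices of mine, not values from the text:

```python
# Toy Diffie-Hellman exchange. P and N are public; A and B stay secret.
P, N = 23, 5          # public prime and base (toy-sized for illustration)
A, B = 6, 15          # Alice's and Bob's secret exponents

J = pow(N, A, P)      # step 3: Alice computes J = N^A mod P, sends it to Bob
K = pow(N, B, P)      # step 3: Bob computes K = N^B mod P, sends it to Alice

alice_secret = pow(K, A, P)   # step 4: Alice computes K^A mod P
bob_secret = pow(J, B, P)     # step 4: Bob computes J^B mod P

# Both arrive at N^(A*B) mod P without ever sending A or B:
assert alice_secret == bob_secret
print(alice_secret)
```

Eve sees P, N, J, and K on the wire, but with a realistically large P she cannot feasibly recover A or B from them.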

    Why Diffie-Hellman Works

    At this point, you may be asking, "Why can't Eve break this?" This is, indeed, a good question. Eve knows N, P, J, and K. Why can't she find A, B, or, most importantly, N^AB (mod P)? Isn't there some sort of inverse process by which Eve can recover A from N^A (mod P)?
    Well, the thing Eve would most like to do, that is, take the logarithm (base N) of J, to get A, is confounded by the fact that Alice and Bob have done all of their math modulo P. The problem of finding A, given N, P, and N^A (mod P) is called the discrete logarithm problem. As of now, there is no fast way known to do this, especially as P gets really large. One way for Eve to solve this is to make a table of all of the powers of N modulo P. However, Eve's table will have (P-1) entries in it. Thus, if P is enormous (say 100 digits long), the table Eve would have to make would have more entries in it than the number of atoms in the universe! Simply storing that table would be impossible, not to mention searching through it to find a particular number. Thus, if P is sufficiently large, Eve doesn't have a good way to recover Alice and Bob's secret.
    "That's fine," you counter, "but if P is so huge, how in the world are Alice and Bob going to compute powers of numbers that big?" This is, again, a valid question. Certainly, raising a 100-digit-long number to the power 138239, for example, will produce a ridiculously large number. This is true, but since Alice and Bob are working modulo P, there is a shortcut called the repeated squaring method.
    To illustrate the method, we'll use small numbers (it works the same for larger numbers, but it requires more paper to print!). Say we want to compute 7^29 (mod 17). It's actually possible to do this on a simple four-function calculator. Certainly, 7^29 is too large for the calculator to handle by itself, so we need to break the problem down into more manageable chunks. First, break the exponent (29) into a sum of powers of two. That is,

    29 = 16 + 8 + 4 + 1 = 2^4 + 2^3 + 2^2 + 2^0

    (all we're doing here is writing 29 in binary: 11101). Now, make a list of the repeated squares of the base (7) modulo 17:
    7^1 (mod 17) = 7
    7^2 (mod 17) = 49 (mod 17) = 15
    7^4 (mod 17) = 7^2 * 7^2 (mod 17) = 15 * 15 (mod 17) = 4
    7^8 (mod 17) = 7^4 * 7^4 (mod 17) = 4 * 4 (mod 17) = 16
    7^16 (mod 17) = 7^8 * 7^8 (mod 17) = 16 * 16 (mod 17) = 1
    Then, 7^29 (mod 17) = 7^16 * 7^8 * 7^4 * 7^1 (mod 17) = 1 * 16 * 4 * 7 (mod 17) = 448 (mod 17) = 6.
    The neat thing is that the numbers in this whole process never got bigger than 16^2 = 256 (except for the last step; if those numbers get too big, you can reduce mod 17 again before multiplying). Thus, even though P may be huge, Alice's and Bob's computers don't need to deal with numbers bigger than (P-1)^2 at any point.
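The repeated-squaring method fits in a few lines of code. Python's built-in `pow(base, exp, mod)` already implements it, but a minimal sketch of the method itself:

```python
def power_mod(base, exp, mod):
    """Repeated-squaring modular exponentiation.

    Intermediate values never exceed (mod - 1)^2, no matter how
    large the exponent is.
    """
    result = 1
    square = base % mod
    while exp > 0:
        if exp & 1:                            # this binary digit of exp is 1
            result = (result * square) % mod
        square = (square * square) % mod       # next repeated square
        exp >>= 1                              # move to the next binary digit
    return result

# The worked example from the text: 7^29 mod 17
print(power_mod(7, 29, 17))                    # -> 6
```

The loop walks the binary expansion of the exponent (29 = 11101), multiplying in exactly the repeated squares listed above, and reduces mod 17 after every multiplication so nothing ever grows large.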

Intel Memory Protection Technology - Some informational links

The definition below is taken from Wikipedia:

"Intel MPX (Memory Protection Extensions) is a set of extensions to the x86 instruction set architecture. With compilerruntime library and operating system support, Intel MPX brings increased security to software by checking pointer references whose normal compile-time intentions are maliciously exploited at runtime due to buffer overflows."

This is indeed great. Many complicated problems that developers spend a lot of time debugging are related to memory overwrites/underwrites, leaks, accessing freed memory, stack overwrites, etc.

Intel MPX, in its current form, solves overwrite/underwrite issues. Any user-space application compiled and linked with -fmpx should detect these issues.

It is not only useful for debugging, but also helps in preventing some types of malware injections. 

Since the checks are done in hardware, there should not be much performance penalty, and hence it can be used even in production environments.

From developer perspective too, most of the work is part of the compiler (GCC) and a library.  Hence, there is almost no additional effort required from the developers.

Some informational links are here: MPX integration in the Linux kernel.

Sunday, November 15, 2015

DPDK based Accelerated vSwitch Performance analysis

DPDK-based OVS acceleration improves performance many-fold, up to 17x over native OVS. Please see the link here:

It is interesting to examine the following:
  • Performance impact with increasing number of flows.
  • Performance impact when additional functions introduced.
  • Performance impact when VMs are involved.

The performance report has measured values with many variations. To simplify the analysis, I am taking the values measured with one core and hyper-threading disabled. Also, I am taking numbers with 64-byte packets, as the PPS number is important for many networking workloads.

Performance impact with increasing number of flows:

Scenario: Phy-OVS-Phy (packets received from Ethernet ports are processed by the host Linux OVS and then sent out; there is no VM involvement). The following snippet is taken from Table 7-12.

One can observe from the above table that performance numbers go down dramatically as the number of flows increases. Performance went down by 35.8% when the flows were increased from 4 to 4K, and by 45% when the flows were increased from 4 to 8K.
Cloud and telco deployments can typically have a large number of flows due to SFC and per-port filtering functionality. It is quite possible that one might have 128K flows. One thing to note is that the performance degradation is not very high from 4K to 8K, which is understandable since the cache-thrashing effect levels off after some number of simultaneous flows. I am guessing that the performance degradation with 128K flows would be around 50%.

Performance impact when additional functions introduced

Many deployments are going to have not only VLAN based switching, but also other functions such as VxLAN,  Secured VxLAN (VxLAN-o-Ipsec) and packet filtering (implemented using IPTables in case of KVM/QEMU based hypervisor).  There are no performance numbers in the performance report with VxLAN-o-Ipsec and IPTables, but it has performance numbers with VxLAN based networking.

The following snippet was taken from Table 7-26. This table shows PPS (packets per second) numbers when OVS DPDK is used with VxLAN. Again, these numbers are with one core and hyper-threading disabled.

Without VxLAN, the performance number with 4 flows is 16,766,108 PPS. With VxLAN, the PPS number dropped to 7,347,179, a 56% degradation.

There are no numbers in the report for 4K and 8K flows with VxLAN. Those numbers would have helped in understanding the performance degradation with both an increasing number of flows and additional functions.

Performance impact when VMs are involved

Table 7-12 of the performance report shows the PPS values with no VM involved. The following snippet shows the performance measured when a VM is involved; the packet flows through the Phy-OVS-VM-OVS-Phy components. Note that the VM runs on a separate core, so VM-level packet processing should not impact the measurement.

Table 7-17 shows the PPS value for 64-byte packets with 4 flows: 4,796,202.
The PPS number with 4 flows without VM involvement is 16,766,108 (as shown in Table 7-12).
Performance degradation is around 71%.

PPS value with 4K flows with VM involved (from Table 7-21): 3,069,777
PPS value with 4K flows with no VM involved (from Table 7-12): 10,781,154
Performance degradation is around 71%.

Let us see the performance impact with the combination of VM involvement & increasing number of flows -

The PPS number with 4 flows without VM involvement is 16,766,108 (as shown in Table 7-12).
The PPS number with 4K flows with VM involvement is 3,069,777 (from Table 7-21).
Performance degradation is around 81%.
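The degradation percentages quoted above follow directly from the PPS values in the report's tables; a quick sanity check (the variable names are mine):

```python
# PPS values quoted from the performance report (Tables 7-12, 7-17, 7-21).
pps_4flows_no_vm = 16_766_108   # 4 flows, Phy-OVS-Phy
pps_4k_no_vm = 10_781_154       # 4K flows, Phy-OVS-Phy
pps_4flows_vm = 4_796_202       # 4 flows, Phy-OVS-VM-OVS-Phy
pps_4k_vm = 3_069_777           # 4K flows, Phy-OVS-VM-OVS-Phy

def degradation(base, measured):
    """Percentage drop relative to the baseline PPS."""
    return (base - measured) / base * 100

print(f"4 -> 4K flows, no VM  : {degradation(pps_4flows_no_vm, pps_4k_no_vm):.1f}%")
print(f"VM involved, 4 flows  : {degradation(pps_4flows_no_vm, pps_4flows_vm):.1f}%")
print(f"VM involved, 4K flows : {degradation(pps_4k_no_vm, pps_4k_vm):.1f}%")
print(f"VM + 4K flows combined: {degradation(pps_4flows_no_vm, pps_4k_vm):.1f}%")
```

The computed values come out at roughly 36%, 71%, 72%, and 82%, matching the figures discussed above.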


  • Performance degradation is steep when packets are handed over to the VM. One big reason is that the packet traverses the DPDK-accelerated OVS twice, unlike the Phy-OVS-Phy case where OVS sees the packet only once. The second most likely culprit is the virtio Ethernet (vhost-user) processing in the host.
  • Performance degradation is also steep when the number of flows is increased. I guess cache thrashing is the culprit here.
  • Performance also degrades when more functions (such as VxLAN) are used.

Having said that, the main thing to note is that DPDK-based OVS acceleration in all cases shows an almost 7x to 17x improvement over native OVS. That is very impressive indeed.


Thursday, November 12, 2015

OF Actions using eBPF programs

I was just thinking about this, searched the Internet, and found that it is already implemented and part of OVS. Please see the series of patches here:

Also, if you want to look at the patched OVS source code,  check this Linux distribution :

You can check the net/openvswitch directory to understand BPF changes.

Essentially, the above patches address one of the concerns of OF critics. OF specifications today define actions, and any new action requirement has to go through the standardization process in the ONF, which can take a few months, if not years. Many companies, like Nicira and Freescale, defined their own proprietary extensions, but their acceptability is low, as they are not part of the OF specifications. Now, with BPF integration in the kernel OVS data path, one can define proprietary actions as a set of BPF programs. As long as this generic BPF-program action is standardized in the OF specifications, new actions can be implemented as BPF programs without having to go through the standardization process.

As of now, Linux allows BPF programs to be hooked at three places: the system-call level, the CLS (traffic classifier) level, and the "dev" layer. Once the above patches are accepted in the Linux kernel, the OVS datapath becomes a fourth place where BPF programs can be attached.

I am hoping that BPF programs can be executed not only in the kernel, but also in DPDK-accelerated OVS as well as in smart-NICs (FPGA, network processors, AIOP, etc.).


virtio based flow API


The OPNFV DPACC team is trying to standardize the flow API that would be called by VNFs to talk to flow-processing modules in the VMM or smart-NIC.

One discussion group was started.  You are welcome to join this group here:

Why a flow API from VNFs: NFV success depends on the performance VNFs can achieve. Many feel that NFV flexibility is great, but that to achieve performance similar to physical appliances, one needs 2x to 10x the hardware. Even with a lot more hardware, one can achieve throughput, but issues remain in achieving low packet latency and low packet jitter. Since NFV uses generic hardware, cost may not go up (in comparison with physical appliances), but the electricity charges for powering the extra hardware and for cooling would go up. Space requirements also go up, which adds to the cost of operations.

OVS acceleration using DPDK and smart-NICs addresses this issue by accelerating or offloading the virtual-switching functionality. Half of the problem is taken care of by this; the other half is packet processing by the VNFs. There are many VNFs that follow a CP-DP model where the DP can be offloaded, and many smart-NIC solutions want to take care of that too. But the lack of standards is dampening these efforts. In NFV environments, VNFs and smart-NICs come from two different parties. VNFs can be placed on many servers with smart-NICs from various vendors. VNF vendors would not like to maintain drivers for various smart-NICs; likewise, smart-NIC vendors would not like to provide drivers for various VNF operating systems and their versions. Operators, who procure both, also would not want to worry about any inter-dependencies. That is where standardization helps.

Another, bigger issue in NFV markets is the type of VNFs that will be placed on a given server. The set of VNFs on a server keeps changing over time. Since the types of VNFs are not known a priori, there are a few ways to address this:

  • One way to address this is to provide all kinds of fast-path functions in smart-NICs. That is very complicated, not only with respect to the number of functions, but also with respect to keeping up with newer VNFs and their related fast-path functions.
  • A second way is to let the VNFs install their own fast paths in the smart-NIC. That is also challenging, as there are multiple smart-NIC vendors: it is a complexity for vNF vendors to provide fast-path functions for multiple smart-NIC vendors, and a complexity for smart-NIC vendors to ensure that a malicious VNF fast path does not hinder their operations. A few people are exploring eBPF-based fast-path functions; since eBPF comes with some verification of the program to ensure it does not hinder other operations, many think this could be one approach.
  • A third way is to define a generic pipeline of packet stages with dynamic flow programming, similar to Openflow but with reduced complexity. Smart-NIC vendors only need to provide a flow-processing entity; all the intelligence of using it is up to the VNFs, which do not push any software entity into the smart-NIC. With this mechanism, the smart-NIC can be a network processor, AIOP, FPGA, etc.

The above discussion forum is trying to create a generic flow API that can be used by VNFs to program tables, flows with actions, objects, etc.

Please join the group and let us discuss technical items there. Once we have some clarity, then we intend to present this to OPNFV DPACC forum.


Wednesday, October 21, 2015

Security in Hybrid Cloud sans Private Cloud

Check this link :

A few predictions made earlier are becoming reality. Pure private clouds are slowly disappearing. Enterprises are increasingly using public clouds for many workloads and going for very small private clouds for critical workloads. The combination of public cloud hosting with a private cloud is called hybrid cloud.

I believe that the hybrid cloud market as defined today (private + public combination) will decline over time and become a niche market. But another kind of hybrid cloud market, where enterprises use multiple public clouds, will grow in the future.

Security considerations: In my view, enterprises need to embed security in their workloads and not depend on the generic security solutions provided by cloud operators. A few reasons why this is required:

  • Enterprises may need to host their services in various countries, where there may not be stringent laws on data protection and data security.
  • Enterprises may not like to depend on the integrity of the cloud operators' administrators.
  • Enterprises may not like cloud operators sharing their data & security keys with governments without their consent.
What it means is that :
  • Enterprises would need to consider hypervisor domain as insecure, at least for data.
What is it Enterprises would do in future :
  • Security will be built within the workloads (VMs)
    • Threat Security such as firewall, IPS, WAF.
    • Transport level data security such as SSL/TLS.
    • Network Level Security such as Ipsec, OpenVPN
  • Visibility would be built into the virtual machines for 
    • Performance visibility
    • Traffic visibility
    • Flow visibility
Essentially, virtual machines would have all necessary security and visibility agents built into them. Centralized management systems, controlled by Enterprises,  will now configure these agents from a central location to make the configuration & management simpler.

There is a concern that if security is built into the VMs, then attacker exploiting the applications in the VMs may be able to disable the built-in security functions, falsify the data or send wrong information to analytic engines. 

That is a valid concern.  I believe that containers would help in mitigating those concerns.
  • Run all security functions in the root container of  the VM.
  • Run applications in non-root containers within the VM
Isolation provided by containers can mitigate the challenges associated with combining security with the applications.

Service chaining: Traditionally, multiple security services are applied by middle virtual appliances. If the traffic is encrypted end-to-end, these middle appliances cannot do a good job. Yes, that is true. This can be solved by Cloud-SFC (SFFs within the virtual machines), where the VM SFF itself steers the traffic to various middle appliances or container services within the VM. More on this later...

I believe that, with the increasing popularity, flexibility, scale-out, and performance provided by CSPs, it is just a matter of time before private clouds disappear or decline dramatically. Cloud users (enterprises) will go for inbuilt security within VMs to host them in public clouds, and security/visibility companies will have to address this trend. In my view, only the security/visibility companies that do will survive. Maybe that's dramatic? What do you think?

Tuesday, October 6, 2015

Great and informative post about Micro-Service Architecture

Please see this post in techtarget -

It is logical to think that micro-service architecture requires a store-front service that hides all the complexities of the various micro-services. As the post says, multiple store-front services may also need to be chained externally. By breaking the architecture into this two-dimensional solution, it provides flexibility and a reduction in complexity.

Since SFC-like solutions can be used to chain multiple store-front services, scale-out and reliability are taken care of by the SFC solutions.

But the services hidden by the store-front services need to have similar scale-out and high-availability properties. It means that store-front services need not only to provide a service, but also to work as a kind of orchestrator of services, working with the VIM (virtual infrastructure manager), such as Nova and Magnum, to bring up/bring down the micro-services they front-end.

It did not cross my mind that service VMs or service containers could be interacting with the schedulers. It certainly makes sense keeping this kind of architecture.

Great post from Tom Nolle on TechTarget.  Really enjoyed reading it.

I wonder what kind of security requirements would crop up in this kind of solution architecture. I am guessing that security within the service would be the simpler solution.


Monday, October 5, 2015

Openflow based Local fast path for distributed Security

OPNFV DPACC working group is trying to standardize the accelerator interface for vNFs to accelerate the workloads.

Accelerator standardization is required to ensure that virtual appliances can work on different types of compute nodes: compute nodes with no acceleration hardware, compute nodes with accelerator hardware from vendor 1, compute nodes with accelerator hardware from vendor 2, and so on.
This becomes critical in cloud environments, as compute-node hardware is procured independently of the virtual appliances. Virtual-appliance vendors would like their images to work on multiple compute nodes, and operators would like to ensure that any virtual appliances they purchase continue to work with future compute-node hardware.

The OPNFV DPACC team's goal is to identify the various acceleration types and then define a software interface for each acceleration type.

The VMM, which is typically bound to the compute-node hardware, is expected to have a conversion module from the software interface to the local hardware interface.

OPNFV DPACC team is choosing virtio and vring transport to implement software interface.

Accelerations normally fall into two categories: the look-aside model and the inline/fast-path model. In the look-aside model, the virtual-appliance software sends a command to the accelerator and expects a response for each command; the response can come back synchronously or asynchronously. The inline/fast-path model is typically for packet-processing vNFs: any network function that does not need to inspect every packet can take advantage of it.

In the inline/fast-path acceleration model, the network function in the virtual appliance establishes session state (either proactively or reactively to the packets) and then expects the fast-path accelerator to process further packets.

Many smart-NIC vendors provide facilities for their customers to create fast-path functions in the smart-NIC. This typically works best in physical appliances: they are fixed-function, and the entire software and hardware stack comes from one vendor, who ensures that both the normal network function and the fast-path functions work together.

In the virtual world, the smart-NIC comes from one vendor and the virtual appliances come from separate vendors. Also, the network functions that will be hosted on the compute node are unknown at smart-NIC build time. It may seem that smart-NIC vendors need to implement all types of fast-path functions in the smart-NIC, but that is obviously not feasible, and the smart-NIC may not have enough code space for all of them. Note that the smart-NIC could be based on an FPGA or a constrained network processor.

Another model could be that the smart-NIC is populated dynamically with fast-path functions based on the types of virtual appliances being brought up on that node. This also could be problematic, as there could be multiple smart-NIC vendors using various processors/network-processors/FPGAs, etc.; hence, one may need to create similar fast-path functions for many smart-NIC types. There are also security & reliability issues, as these functions come from various vendors and may not co-exist well: some functions may misbehave or crash other functions. In addition, there is always a concern about the time this adds to bringing up and bringing down a virtual appliance.

Hence, some in the OPNFV DPACC team believe that the smart-NIC must implement a flow processor such as OpenFlow.  OpenFlow has several advantages:

  • Only one logic module.
  • One could create multiple instances.
  • Each instance can be controlled by separate entity.
  • Multiple tables in each instance.
  • Flows in the table can be pipelined.
  • Flows can be programmed with pre-defined actions.

If the smart-NIC implements OpenFlow, the orchestration module can create an instance for each vNF and assign ownership of that instance to the vNF.  The vNF, at run time, can program flows with appropriate actions to create the fast path for those flows.
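The per-vNF instance idea can be sketched as a toy model: one logic module (a table-pipeline lookup), multiple independently owned instances, multiple tables per instance, and goto-table pipelining. None of the names below come from a real smart-NIC or OpenFlow SDK; they are illustrative only:

```python
class OFInstance:
    """Toy model of one per-vNF OpenFlow instance on the smart-NIC:
    multiple match/action tables that can be pipelined via goto_table."""

    def __init__(self, num_tables=2):
        self.tables = [dict() for _ in range(num_tables)]

    def add_flow(self, table_id, match, actions):
        # 'match' is a hashable tuple of header fields; 'actions' is a
        # list such as [("goto_table", 1)] or [("output", port)].
        self.tables[table_id][match] = actions

    def lookup(self, match):
        # Walk the pipeline starting at table 0, following goto_table.
        table_id, applied = 0, []
        while table_id is not None:
            actions = self.tables[table_id].get(match)
            if actions is None:
                return applied + [("punt_to_vnf",)]  # table miss: slow path
            table_id = None
            for act in actions:
                if act[0] == "goto_table":
                    table_id = act[1]
                else:
                    applied.append(act)
        return applied

class SmartNIC:
    """The orchestrator creates one OFInstance per vNF, so each vNF
    owns and programs only its own instance."""

    def __init__(self):
        self.instances = {}

    def create_instance(self, vnf_id):
        self.instances[vnf_id] = OFInstance()
        return self.instances[vnf_id]
```

A vNF would program its own instance at session-setup time (e.g. a firewall adds a flow for an accepted connection), after which matching packets follow the fast path and everything else is punted up to the vNF.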

It has already been proven in industry that OpenFlow, with some proprietary extensions, can be used to realize IPsec, firewall, NAT, forwarding and SLB fast paths.

Please see this link for the presentation slides made to OPNFV DPACC:  The slide deck provides a graphical picture of the above description.

Friday, October 2, 2015

Configuration relay feature in Open-SFC - Even though the Open-SFC project is about "Service Function Chaining", it includes one feature called "configuration relay" which is a very useful generic feature.

The Openstack Neutron advanced services project provides configuration support for a few network services.  VPN-as-a-Service, Firewall-as-a-Service and LB-as-a-Service are a few examples.  These services provide RESTful APIs for IPsec VPN, stateful firewall and load balancers.  They also follow the familiar "plugins" and "agents" paradigm: plugins implement the RESTful API, store the configuration in the database, and send the configuration to the agents.  Agents today run in the network nodes, where they receive the configuration from the plugin and configure local services such as Strongswan, iptables and HAProxy.  Network nodes are reachable from the Openstack controller, hence plugin drivers and agents can communicate with each other (via AMQP).

In the recent past, many network services have been implemented as vNFs.  With distributed security and end-to-end security becoming the norm, network security services (such as firewall and IPsec VPN) are embedded within the application VMs.  In these cases, the advanced-network-service agents need to run inside these virtual machines.  But there is a reachability issue between the plugin drivers and the agents: virtual machines are on the data network while controller nodes are on the management network, and for isolation/security reasons virtual machines are normally not allowed to send/receive traffic on the management network directly.

The configuration relay is expected to mitigate this issue.  The relay runs in the VMM/hypervisor of each compute node.  Since the VMM is reachable from the controllers, the relay becomes a conduit (just for configuration) between the network service plugin drivers and the agents in the local virtual machines.
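A minimal sketch of the relay's role, assuming it receives configuration from the plugin driver (over AMQP on the management side) and forwards it to each VM over that VM's virtio-serial channel. The channel paths and the newline-delimited-JSON framing are my assumptions for illustration, not the actual Open-SFC wire format:

```python
import json

class ConfigRelay:
    """Sketch of the relay in the VMM/hypervisor: reachable from the
    controller, it forwards configuration to local VMs over their
    virtio-serial channels (paths below are illustrative)."""

    def __init__(self, channel_opener=open):
        # channel_opener is injectable so the sketch can be exercised
        # without real virtio-serial devices.
        self._open = channel_opener
        self.channels = {}

    def register_vm(self, vm_id, channel_path):
        # e.g. a host-side channel socket/file created by libvirt/QEMU
        self.channels[vm_id] = channel_path

    def on_config_from_plugin(self, vm_id, service, config):
        # Called when a plugin driver pushes configuration for a
        # service (firewall, ipsec, ...) destined for an agent in vm_id.
        msg = json.dumps({"service": service, "config": config}) + "\n"
        with self._open(self.channels[vm_id], "a") as chan:
            chan.write(msg)
```

The relay never needs data-network connectivity to the VM; the virtio-serial channel is a hypervisor-mediated path, which is what keeps the management network isolated from the guests.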

I will post more information on how this relay works technically, but the following repositories/directories have the source code.

FSL_NOVA_SerialPort_Patch is a patch to the Nova portion of the compute node – this patch allows creation of a virtio-serial port (to allow communication between the local virtual machine and the configuration relay) via libvirtd and QEMU.  There is also a small service on the compute node that enables the configuration relay.

An example program in the vNF that communicates with the relay to get hold of the configuration (based on a comment posted by Srikanth):
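For flavor, here is a hedged sketch of what the guest side could look like, assuming the relay delivers newline-delimited JSON over the virtio-serial port. Inside the VM such a port typically shows up under /dev/virtio-ports/; the exact channel name and the message format here are illustrative assumptions, not the actual Open-SFC code:

```python
import json

def read_config_messages(port, handler):
    """Guest-side agent loop sketch: read newline-delimited JSON
    configuration messages from the virtio-serial port and hand each
    one to the local service handler (Strongswan, iptables, ...)."""
    for line in port:
        if not line.strip():
            continue
        msg = json.loads(line)
        handler(msg["service"], msg["config"])

# Illustrative usage inside the vNF (channel name is an assumption):
# with open("/dev/virtio-ports/org.example.config-relay") as port:
#     read_config_messages(port, apply_config)
```

The agent itself stays unchanged apart from its transport: instead of listening on AMQP, it consumes the same configuration messages from the serial channel the relay writes into.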



Friday, August 21, 2015

Some informational links on HSM

Some good information/links on HSMs and related technologies:

Safenet Luna, which is one of the popular HSMs in the market :

Cloud HSM (Based on Safenet Luna) :

A very good blog entry on using HSMs from OpenSSL through the OpenSSL PKCS#11 engine interface:  More details on the actual steps, with an example, can be found here:

One more resource describing the OpenSSL integration :  One more place to get the same document:

PKCS11 standard :

Openstack Barbican provides key management functionality and can be enhanced to use an HSM internally.  More information can be found at :

Saturday, July 11, 2015

Hyperscale providers developing their own processors?

With increasing migration to public clouds, it appears that 70% of all Enterprise workloads will run in public clouds, and 90% of those workloads are expected to run in the Amazon, Google Compute and Microsoft Azure public clouds.

With so much processing power centralized, it may seem natural for these companies to look into developing their own processors tuned to their workloads.

Many articles have talked about rumors that these companies have a strategy to develop their own processors.

Some interesting articles I could find:

The two articles above seem to suggest that Amazon, having hired a fair number of engineers from the failed ARM server startup Calxeda, is designing its own ARM server chips.

This older article seems to suggest that Google is also thinking of designing its own server chips.

What do you think?  When would this happen, in your view?

My take is that it will be a while before they decide to build their own chips.  It is possible that they will design some specialized chips for specific workloads.  But creating a general-purpose processor is a challenging task.

Saturday, April 18, 2015

AWS and Networking

This link (a presentation by an AWS Distinguished Engineer) provides insight into "networking" being the pain point.

Some interesting things from this post:

So, the answer is that AWS probably has somewhere between 2.8 million and 5.6 million servers across its infrastructure.
“Networking is a red alert situation for us right now,” explained Hamilton. “The cost of networking is escalating relative to the cost of all other equipment. It is Anti-Moore. All of our gear is going down in cost, and we are dropping prices, and networking is going the wrong way. That is a super-big problem, and I like to look out a few years, and I am seeing that the size of the networking problem is getting worse constantly. At the same time that networking is going Anti-Moore, the ratio of networking to compute is going up.”

More importantly, this seems to contradict one of my projections, "Death of SRIOV".

Now, let’s dive into a rack and drill down into a server and its virtualized network interface card. The network interface cards support Single Root I/O Virtualization (SR-IOV), which is an extension to the PCI-Express protocol that allows the resources on a physical network device to be virtualized. SR-IOV gets around the normal software stack running in the operating system and its network drivers and the hypervisor layer that they sit on. It takes milliseconds to wade down through this software from the application to the network card. It only takes microseconds to get through the network card itself, and it takes nanoseconds to traverse the light pipes out to another network interface in another server. “This is another way of saying that the only thing that matters is the software latency at either end,” explained Hamilton. SR-IOV is much lighter weight and gives each guest partition on a virtual machine its own virtual network interface card, which rides on the physical card.
It seems that AWS has started to use SR-IOV based Ethernet cards.

I guess this is possible for AWS because it also provides AMIs with in-built drivers.

What I am not sure about is how they do some of the functions (IP filtering, overlays, and traffic probing and analytics) without being able to see the traffic.

Currently, AWS seems to be using the Intel IXGBE driver, which means Intel NICs are being used here.  See this :

Amazon bought Annapurna in 2015.  I wonder what role that plays.  Could it be that AWS found issues like the ones detailed here, and hence is going after programmable smart-NICs?

Food for thought...