Hello!
Just briefly: I tried BIRD in QEMU VMs connected via OVS bridges
several years ago. It was even worse; I ran into segfaults in OVS.
(A week later, one of my friends told me that it was an embargoed bug.)
I didn't try any further; it wasn't feasible.
I'd like to promise that I'll look into it, but I'm very busy and
will surely forget your issue in ten minutes. If you don't hear from me by
next Monday, please ping me.
Maria
On 5/6/19 9:33 PM, Saso Tavcar wrote:
The best solution would be a good OVS routing table patch, as quoted below.
Maybe the BIRD developers can help, since they are native C developers.
We also tried BIRD on native (K)VM network interfaces. Since those are a
kind of software emulation too, we ran into unrecoverable network IRQ
problems, so an overloaded OVS is still the better solution for us.
Regards,
saso
On 6 May 2019, at 21:01, Kees Meijs <[email protected]> wrote:
Hi Saso,
Thank you very much. OVS is new in the mix as well (we're not replacing
Quagga alone). Obviously we didn't expect this to happen.
I'll see if patching OVS in Debian in a similar way works for us, or if
another approach fits better (i.e. maybe not using OVS at all).
If you know of a better, more upgrade- and maintenance-proof
solution, I would welcome more information.
Regards,
Kees
On 06-05-19 20:40, Saso Tavcar wrote:
This is an OVS issue, already discussed:
https://mail.openvswitch.org/pipermail/ovs-discuss/2016-November/043007.html
...
https://mail.openvswitch.org/pipermail/ovs-discuss/2016-November/043063.html
Official OVS quote:
> We'd accept patches to improve OVS's routing table code. It's not
> designed to scale to 1,800,000 routes. We'd also take code to suppress
> the routing table code in cases where it isn't actually needed, since
> it's not always needed. But we can't take a patch to just delete it;
> I'm sure you understand.
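For anyone trying to reproduce this: the routing table the quote refers to is OVS's internal copy of the kernel routes (used for tunnel endpoint lookup), and it can be inspected through the appctl interface of a running ovs-vswitchd. A minimal sketch (assumes OVS is installed and the daemon is running):

```shell
# Dump the routes ovs-vswitchd has mirrored into its internal routing
# table; with a full BGP feed this is where the ~1,800,000 entries end up.
ovs-appctl ovs/route/show

# Count the entries to see how large the table has grown.
ovs-appctl ovs/route/show | wc -l
```

Watching that count while BGP converges shows whether the internal table is tracking the full feed or only what the tunnels actually need.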
I tried to apply this patch at the time, but it was already useless for
newer versions:
https://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20161123/5379b333/attachment.bin
Our workaround was to scale the VM to 3 vCPUs, since our average
system load is 1.5 for BGP.
You can see what is happening: