
[VPP-836] VPP crashes when large amount of ACL created #2276

Closed
vvalderrv opened this issue Feb 1, 2025 · 2 comments

Description

We see VPP crash when we create a large number of ACLs. We are using OpenStack (VPP agent) to create a large number of security groups, which internally translate to VPP ACLs.

Working with John DeNisco and Andrew Yourtchenko, we were able to identify it as a memory leak in the ACL plugin.

Assignee

Andrew Yourtchenko

Reporter

Kahou Lei

Comments

  • jdenisco (Mon, 8 May 2017 19:37:12 +0000): Corrective Actions email chain:

On 6 May 2017, at 15:20, Dave Barach (dbarach) <[email protected]> wrote:

Dear Andrew,

 

We should consider these four corrective actions:

- Transition Metacloud to 17.04.
	- What is the plan?
	- We are not staffed to support historical releases in perpetuity.
	- Let’s be explicit with the Metacloud team on this subject. They’re continuously tripping over old / fixed bugs.
	- We need to improve our internal process so that critical bugfixes are reliably cherry-picked.

 

Yeah, we should chat with Jerome and Emran on this. The way we do releases today frankly makes just about zero sense to me - we are stamping the release and then saying it is dead just a few months after it. Unless we have a third party maintain the equivalent of a "stable train" fork which they would maintain themselves (e.g. in this case essentially making 1701 a long-term release), I think we should just stamp the master with labels and maintain private forks off those labels; that would be much more reflective of the current reality.

 

 

 

- Improve the L2 mac address statistics
	- Do not allocate 02:00:00:00:00:01, 02:00:00:00:00:02, or similar.
	- Applicable only if the statistics currently suck

 

That doesn't apply since the entries in question are for IP connections. I don't alloc for mcast stuff unless the ACL is configured for this. And again, the issue in question (assuming I didn't misdiagnose it of course) is not a leak. It is a cycle of requests 2x the previous cycle in a tight loop of adding a single entry. Notice that the other table in the pair has exactly the same connexions except with src/dest swapped, and the memory usage there is drastically less. 
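A toy illustration of the allocation pattern described above (not VPP code; the heap size and initial request size are assumed): if each insert into the degenerate bucket triggers an allocation request twice the size of the previous one, the requests grow geometrically and a fixed heap is exhausted after only a few dozen inserts.

```c
/* Hypothetical sketch, not VPP code: geometric growth of allocation
 * requests when entries keep piling into one degenerate bucket. */
#include <stdio.h>

int main (void)
{
  unsigned long long heap_bytes = 1ULL << 30;	/* assumed 1 GB main heap */
  unsigned long long request = 64;		/* assumed initial request */
  unsigned inserts = 0;

  while (request <= heap_bytes)
    {
      inserts++;
      printf ("insert %2u: allocation request = %llu bytes\n", inserts, request);
      request *= 2;	/* "a cycle of requests 2x the previous cycle" */
    }
  printf ("heap of %llu bytes exhausted after %u inserts\n", heap_bytes, inserts);
  return 0;
}
```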

 

 

- Backport the ACL plugin change [switch to bihash] to 17.01

 

Yeah, I am happy to do that - we need to discuss what Emran's/Jerome's view is. I think it is more a layer 9 issue than anything, but it also seems like e.g. CSIT is wonky on 1701 (at least as far as I saw with the pretty print gerrit, it consistently bonks out on CSIT with a fairly cryptic error - so some technical effort might be needed). I saw your "recheck"; it seems to be a consistent error on CSIT... (why did it stop working? Did we "clean up" something in CSIT?)

 

Emran/Jerome - do you think it might make sense to have a call on Monday so we can discuss what the strategy is here? (As well as for any other real-world customer, frankly.)

 

 

- Dust off the 4368.

 

Independent of the above: 4368 should be rebased, tested carefully, and merged into master only. I would prefer that it not go into 17.01 or 17.04, to avoid making more trouble.

 

Yeah, even for master or 1704 it doesn't matter unless someone starts to (ab?)use the classifier in a dynamic fashion. So this is code that won't get exercised other than by the unit tests. That is a bad thing (tm) in my book. But again, maybe I am wrong.

 

 

Please remember: it is not the committer community’s responsibility to drive code reviews to completion. There has been no activity on this Gerrit since Dave Wallace rebased it on 2/1.  

 

When we discussed it, you had me finally almost convinced it ain't gonna happen - so when I heard Metacloud weren't using security groups, again using the same principle of least risk, I didn't push it too much, also after hearing that the support strategy for 1701 is "1704 is around the corner, they must switch", both in a tête-à-tête discussion and when we had a meeting in Paris. Of course, in the future I will be more "pessimistic" aka "believing" with my assessments when they contradict my gut.

 

So, in line with my previous sentence - if we are pulling 4368 into master then we will also need to revisit the whole "pathological hash" discussion, because at present, even with the 4368 fix in place, it will be possible to create arbitrary-length linear list lookups in the datapath based on traffic. What do you think?
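A toy illustration of that "pathological hash" concern (not VPP code; the hash function and key choice are hypothetical): if the bucket index depends on only a few bits of the key, traffic can be crafted so that every flow lands in one bucket, and each datapath lookup then walks a linear chain whose length the sender controls.

```c
/* Hypothetical sketch, not VPP code: attacker-chosen keys collapsing
 * into one bucket of a weak hash, forcing O(chain length) lookups. */
#include <stdio.h>

#define N_BUCKETS 256

/* toy hash: only the low 8 bits of the key select the bucket */
static unsigned toy_hash (unsigned key) { return key & (N_BUCKETS - 1); }

int main (void)
{
  unsigned chain_len[N_BUCKETS] = { 0 };
  unsigned i;

  /* keys that all share the same low 8 bits collide into bucket 0 */
  for (i = 0; i < 10000; i++)
    chain_len[toy_hash (i << 8)] += 1;

  printf ("bucket 0 chain length: %u (every lookup walks it linearly)\n",
          chain_len[0]);
  return 0;
}
```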

 

--a

 

 

 

Thanks… Dave

 

 

From: Andrew Yourtchenko (ayourtch)

Sent: Friday, May 5, 2017 7:00 PM

To: Naveen Joy (najoy) <[email protected]>

Cc: John DeNisco (jdenisco) <[email protected]>; Dave Barach (dbarach) <[email protected]>; Ian Wells (iawells) <[email protected]>

Subject: Re: Out of heap in the ACL plugin

 

Dave,

 

Long story short - I think we are witnessing the issue which gerrit 4368 was supposed to address, but it seems it never made it into the code...

 

In 1704+ I moved to using a bihash, so this degenerate hashing case is not a problem anymore.

 

Longer version with proofs and a good bit of fun gdb stuff - http://stdio.be/vpp/t/debugging_vpp.pdf

 

Sorry for the mildly stupid format, but since it was webex remote access and I was making screenshots from my phone, it's about 100mb, and I want to be gentle to your inboxes.

 

John,

 

As we discussed, Mon/Tue I am on PTO traveling, but should be on jabber - so please ping me once you decide on the timeline for the call with Dave. I will keep an eye on email, but not in realtime.

 

Have a nice weekend.

 

--a

On 5 May 2017, at 20:53, Naveen Joy (najoy) <[email protected]> wrote:

Hi John,

 

If the debug logging level is enabled, the vpp-agent log file should give you some pointers about which API calls it was making around the time of the crash.

I have made the logging as detailed as possible at the debug level.

 

Thanks,

Naveen

 

From: "John DeNisco (jdenisco)" <[email protected]>

Date: Friday, May 5, 2017 at 7:26 AM

To: "Andrew Yourtchenko (ayourtch)" <[email protected]>

Cc: "Dave Barach (dbarach)" <[email protected]>, "Ian Wells (iawells)" <[email protected]>, "Naveen Joy (najoy)" <[email protected]>

Subject: Re: Out of heap in the ACL plugin

 

 

Ian,

Naveen,

 

Andrew is on the trail of figuring out what is causing the latest VPP crash at Metacloud. He asked whether it is possible to get a trace of the API calls to VPP relating to the ACLs on a running system.

 

Dave,

 

Is there a way to monitor VPP memory usage on a running system?

 

Thanks,

 

John

 

 

From: "Andrew Yourtchenko (ayourtch)" <[email protected]>

Date: Friday, May 5, 2017 at 10:02 AM

To: John DeNisco <[email protected]>

Cc: "Dave Barach (dbarach)" <[email protected]>

Subject: Re: Out of heap in the ACL plugin

 

John,

 

As we discussed on IM - it could well be the reflexive connections not getting cleaned up. I thought they didn't use the stateful ACL (action=2)?

 

Getting rid of stateful rules will be an effective workaround, of course.

 

As for more meaningful actions:

 

Periodically getting "show l2sess count" debug CLI output and monitoring the session pool size behavior over time will allow us to confirm or deny the theory above.

 

Also, since stateful sessions in 1701 are done using the usual classifier tables, keeping an eye on their size in the same fashion can help in understanding the behavior too.
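A minimal polling sketch along these lines, assuming a vppctl binary that can reach the running instance and that the "show l2sess count" and "show classify tables" debug CLI commands mentioned above exist in this build; in practice a simple shell or cron loop calling vppctl would do the same.

```c
/* Hypothetical monitoring sketch: poll the debug CLI once a minute so the
 * growth of the session pool and classifier tables can be tracked over time. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main (void)
{
  for (;;)
    {
      time_t now = time (NULL);
      printf ("=== %s", ctime (&now));
      fflush (stdout);
      /* session pool size over time confirms or denies the theory above */
      system ("vppctl show l2sess count");
      /* classifier table sizes, since 1701 stateful sessions live there */
      system ("vppctl show classify tables");
      sleep (60);
    }
  return 0;
}
```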

--a

On 5 May 2017, at 15:13, John DeNisco (jdenisco) <[email protected]> wrote:

 

Hi Andrew,

 

Dave suggested I contact you regarding the following crash we are seeing with Metacloud.

 

The backtrace is below.

 

This is 17.01.1 with a couple of unrelated patches.

 

I will get more information from the API trace shortly, but do you have any ideas of what might be causing this?

 

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib64/libthread_db.so.1".

Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.

Program terminated with signal 6, Aborted.

#0  0x00007f811d43c5f7 in raise () from /lib64/libc.so.6

Missing separate debuginfos, use: debuginfo-install vpp-17.01.2-release.x86_64

(gdb) bt

#0  0x00007f811d43c5f7 in raise () from /lib64/libc.so.6

#1  0x00007f811d43dce8 in abort () from /lib64/libc.so.6

#2  0x000000000059ae0e in os_panic ()

#3  0x00007f811e332e35 in vec_resize_allocate_memory () from /lib64/libvppinfra.so.0

#4  0x00007f811ee886da in vnet_classify_entry_alloc () from /lib64/libvnet.so.0

#5  0x00007f811ee8d8ad in vnet_classify_add_del () from /lib64/libvnet.so.0

#6  0x00007f811ee8e83d in vnet_classify_add_del_session () from /lib64/libvnet.so.0

#7  0x00007f80dcddce3e in l2sess_add_session (b0=b0@entry=0x7f807eda1140, node_is_out=node_is_out@entry=1, node_is_ip6=node_is_ip6@entry=1, session_table=<optimized out>, session_match_next=<optimized out>,

    opaque_index=opaque_index@entry=5912) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:274

#8  0x00007f80dcddd3b7 in l2sess_node_fn (vm=0x7f811f5ce700 <vlib_global_main>, node=0x7f80de7bb800, frame=0x7f80e2a20c00, node_is_out=node_is_out@entry=1, node_is_ip6=node_is_ip6@entry=1, node_is_track=node_is_track@entry=0,

    feat_next_node_index=feat_next_node_index@entry=0x7f80dcfe1ae0 <l2sess_main+384>) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:669

#9  0x00007f80dcdddb88 in l2sess_out_ip6_addnode_fn (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:762

#10 0x00007f811f38c10a in dispatch_node () from /lib64/libvlib.so.0

#11 0x00007f811f38c2f7 in dispatch_pending_node () from /lib64/libvlib.so.0

#12 0x00007f811f38caab in vlib_main () from /lib64/libvlib.so.0

#13 0x00007f811f5e07a3 in thread0 () from /lib64/libvlib_unix.so.0

#14 0x00007f811e2fed20 in clib_calljmp () from /lib64/libvppinfra.so.0

#15 0x00007ffefe542570 in ?? ()

#16 0x00007f811f5e0fd1 in vlib_unix_main () from /lib64/libvlib_unix.so.0

#17 0x0000000000000000 in ?? ()

  • jdenisco (Mon, 8 May 2017 19:34:56 +0000): Here is the backtrace:

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib64/libthread_db.so.1".

Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.

Program terminated with signal 6, Aborted.

#0  0x00007f811d43c5f7 in raise () from /lib64/libc.so.6

Missing separate debuginfos, use: debuginfo-install vpp-17.01.2-release.x86_64

(gdb) bt

#0  0x00007f811d43c5f7 in raise () from /lib64/libc.so.6

#1  0x00007f811d43dce8 in abort () from /lib64/libc.so.6

#2  0x000000000059ae0e in os_panic ()

#3  0x00007f811e332e35 in vec_resize_allocate_memory () from /lib64/libvppinfra.so.0

#4  0x00007f811ee886da in vnet_classify_entry_alloc () from /lib64/libvnet.so.0

#5  0x00007f811ee8d8ad in vnet_classify_add_del () from /lib64/libvnet.so.0

#6  0x00007f811ee8e83d in vnet_classify_add_del_session () from /lib64/libvnet.so.0

#7  0x00007f80dcddce3e in l2sess_add_session (b0=b0@entry=0x7f807eda1140, node_is_out=node_is_out@entry=1, node_is_ip6=node_is_ip6@entry=1, session_table=<optimized out>, session_match_next=<optimized out>,

    opaque_index=opaque_index@entry=5912) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:274

#8  0x00007f80dcddd3b7 in l2sess_node_fn (vm=0x7f811f5ce700 <vlib_global_main>, node=0x7f80de7bb800, frame=0x7f80e2a20c00, node_is_out=node_is_out@entry=1, node_is_ip6=node_is_ip6@entry=1, node_is_track=node_is_track@entry=0,

    feat_next_node_index=feat_next_node_index@entry=0x7f80dcfe1ae0 <l2sess_main+384>) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:669

#9  0x00007f80dcdddb88 in l2sess_out_ip6_addnode_fn (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at /home/jenkins/workspace/vpp-merge-1701-centos7/build-data/../plugins/acl-plugin/acl/l2sess_node.c:762

#10 0x00007f811f38c10a in dispatch_node () from /lib64/libvlib.so.0

#11 0x00007f811f38c2f7 in dispatch_pending_node () from /lib64/libvlib.so.0

#12 0x00007f811f38caab in vlib_main () from /lib64/libvlib.so.0

#13 0x00007f811f5e07a3 in thread0 () from /lib64/libvlib_unix.so.0

#14 0x00007f811e2fed20 in clib_calljmp () from /lib64/libvppinfra.so.0

#15 0x00007ffefe542570 in ?? ()

#16 0x00007f811f5e0fd1 in vlib_unix_main () from /lib64/libvlib_unix.so.0

#17 0x0000000000000000 in ?? ()

Original issue: https://jira.fd.io/browse/VPP-836

