Our public DNS servers are on site as well. I use forwarders (as opposed
to slaves) from our resolvers to our public DNS servers for our internal
domains, and the resolvers still responded for internal domains, even when
the recursive count was high and external domains weren't responding.
On T
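The forwarder arrangement described above might look something like the following named.conf fragment. This is only a sketch: the zone name and addresses are placeholders, not the poster's actual configuration.

```
// Hypothetical example: forward an internal zone from the resolvers
// to the on-site authoritative servers instead of slaving it.
zone "corp.example" {
    type forward;
    forward only;
    forwarders { 192.0.2.10; 192.0.2.11; };
};
```

With `forward only`, the resolver never falls back to iterative resolution for that zone, which matches the behavior described: internal domains keep answering even when external recursion is struggling.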
In message <53349e66.8050...@ksu.edu>, "Lawrence K. Chen, P.Eng." writes:
>
>
> On 03/26/14 04:02, Sam Wilson wrote:
> > In article ,
> > Jason Brandt wrote:
> >
> >> For now, I've disabled DNS inspection on our firewall, as it is an ancient
> >> Cisco firewall services module, and that seems
On 03/26/14 04:02, Sam Wilson wrote:
> In article ,
> Jason Brandt wrote:
>
>> For now, I've disabled DNS inspection on our firewall, as it is an ancient
>> Cisco firewall services module, and that seems to have stabilized things,
>> but it's only been 30 minutes or so. Until I get a few days
Are you using logs on the BIND machine(s)?
Eliezer
On 03/25/2014 04:31 PM, Jason Brandt wrote:
We recently migrated to BIND for our internal resolvers, and since the
migration, we are experiencing periods of high recursive client counts,
which will at times cause the BIND server to quit respondi
In message
, Scott
Bertilson writes:
>
> This got me to take a look at "rndc recursing" on one of our servers.
>
> It is disappointing that queries for the same FQDN/type/class from the same
> client (different source port and query ID though) are handled individually
> rather than being merge
This got me to take a look at "rndc recursing" on one of our servers.
It is disappointing that queries for the same FQDN/type/class from the same
client (different source port and query ID though) are handled individually
rather than being merged somehow. Is this because of the ID or the source
p
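The merging Scott is asking about can be sketched as follows. This is an illustration only, not BIND's actual implementation: duplicate in-flight questions with the same (name, type, class) are parked on one upstream fetch, and the single answer is fanned out to all waiters.

```python
# Illustrative sketch only -- NOT BIND's actual behavior, which (per the
# thread) handles each client query individually.

class RecursionMerger:
    def __init__(self):
        # (qname, qtype, qclass) -> clients waiting on the same fetch
        self.in_flight = {}

    def ask(self, qname, qtype, qclass, client):
        """Return True if the caller should start a new upstream fetch."""
        key = (qname.lower(), qtype, qclass)
        if key in self.in_flight:
            self.in_flight[key].append(client)  # merged: no new fetch
            return False
        self.in_flight[key] = [client]
        return True

    def answer(self, qname, qtype, qclass, rdata):
        """Fan one upstream answer out to every waiting client."""
        clients = self.in_flight.pop((qname.lower(), qtype, qclass), [])
        return [(c, rdata) for c in clients]

m = RecursionMerger()
m.ask("www.example.com", "A", "IN", "client-1")   # starts a fetch
m.ask("www.example.com", "A", "IN", "client-2")   # merged onto it
delivered = m.answer("www.example.com", "A", "IN", "192.0.2.1")
print(delivered)
```

Merging like this would cap the recursive-client count under a burst of identical questions, at the cost of tracking the in-flight table.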
Thanks guys. I appreciate the input. I don't want to derail the list much
though, as this is supposed to be more BIND than Cisco :)
At this point my BIND installation seems to be stable, so we'll call it
case closed. We do plan on replacing our firewalls in the near future, so
hopefully we won'
use Net::SNMP;

my ($session, $error) = Net::SNMP->session(
    -community => $community,
    -version   => 'snmpv1',
    -port      => 162
);
if (!defined($session)) {
    printf("ERROR: %s.\n", $error);
    exit 1;
}
my $svSvcName =
In article ,
Jason Brandt wrote:
> The code on our FWSMs isn't the latest release, so that could be part of
> the issue, but it's been about 16 hours now since I shut it off, and so far
> so good. I would say though with the other load on our firewalls, it's
> highly possible that they were bei
We don't do any NAT at the firewall level, they're all public IPs.
Thanks,
Jason
On Wed, Mar 26, 2014 at 7:51 AM, Timothe Litt wrote:
> DNS inspection doesn't do anything useful; bind does enough validity
> checking. UDP inspection suffices to let return packets thru.
>
> Another thing to bew
rg [mailto:
> bind-users-bounces+paul.thom=dfo-mpo.gc...@lists.isc.org] *On Behalf Of *Jason
> Brandt
> *Sent:* March-26-14 9:09 AM
> *To:* Sam Wilson
> *Cc:* comp-protocols-dns-b...@isc.org
> *Subject:* Re: High recursive client counts
>
>
>
> The code on our FWSMs isn't
DNS inspection doesn't do anything useful; bind does enough validity
checking. UDP inspection suffices to let return packets thru.
Another thing to beware of is NAT - if you do static NAT translation for
your nameservers, be sure to specify no-payload (e.g.
ip nat inside source static tcp/ud
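The command above is cut off; the shape Timothe is describing appears to be Cisco IOS static NAT with payload rewriting (DNS doctoring) disabled. A hedged sketch with placeholder addresses, not the poster's actual configuration:

```
! Hypothetical example: static NAT for a nameserver with the no-payload
! keyword, so the router does not rewrite addresses inside DNS answers.
ip nat inside source static udp 192.168.1.53 53 203.0.113.53 53 no-payload
ip nat inside source static tcp 192.168.1.53 53 203.0.113.53 53 no-payload
```

Without `no-payload`, the router may rewrite A records in responses that happen to match a NAT entry, which can hand out wrong answers for an authoritative server behind static NAT.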
The code on our FWSMs isn't the latest release, so that could be part of
the issue, but it's been about 16 hours now since I shut it off, and so far
so good. I would say though with the other load on our firewalls, it's
highly possible that they were being overloaded. Unfortunately our MRTG
isn't
In article ,
Jason Brandt wrote:
> For now, I've disabled DNS inspection on our firewall, as it is an ancient
> Cisco firewall services module, and that seems to have stabilized things,
> but it's only been 30 minutes or so. Until I get a few days in, I'll keep
> researching.
We used to run DN
Mark,
That's a very good question, and something we had thought of as a
possibility as well. I hadn't seen any good information in relation to
entropy, so I'll check into your link. We had noticed that on other things
as well, due to the virtual environment, but nothing that caused
performance
This might be a dumb answer, but as the machine is part of a virtual
server, perhaps you have simply run out of entropy? I know it's a
resolver... but isn't BIND perhaps using entropy to randomly talk on
different ports to get answers?
What about installing the 'haveged' package,
www.irisa.fr/caps/p
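For context, the source-port randomization being described can be sketched like this. It is an illustration only: BIND's actual port-selection logic is different, and `os.urandom` on Linux does not block, so this cannot itself demonstrate entropy starvation; it only shows where the randomness comes from.

```python
import os

def random_ephemeral_port(low=1024, high=65535):
    # Map two bytes of kernel randomness into the ephemeral port range.
    # (Illustrative only; not BIND's actual algorithm.)
    n = int.from_bytes(os.urandom(2), "big")
    return low + n % (high - low + 1)

port = random_ephemeral_port()
print(port)
```

The point of haveged (or any entropy daemon) is to keep the kernel pool fed on VMs, where hardware event sources are scarce.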
Cathy,
Thank you for your comments. I will continue to investigate, it helps to
have avenues to look down though.
As far as build version, we are aware that we aren't at current stable
release. However we've tried to stick to the distro release as much as
possible, to help streamline patching.
On 25/03/2014 16:14, Jason Brandt wrote:
> Mike,
>
> I appreciate your insight here. We are indeed on virtual systems,
> using enterprise grade hardware as well. I will be doing more
> investigation today, to see if I can duplicate the behavior, which I
> have been able to do recently.
>
> Yo
Mike,
I appreciate your insight here. We are indeed on virtual systems, using
enterprise grade hardware as well. I will be doing more investigation
today, to see if I can duplicate the behavior, which I have been able to do
recently.
Your VM vs Physical point is the thing that got me head scr
Hi Jason,
I've experienced similar things in the past on 9.8. Since then we've
moved to the latest 9.9, but don't think this is at all version specific
(that said, you could obviously try upgrading). I don't have an exact
solution for you, but some ideas of things to check and personal
experienc