Cool, thank you very much for the information.

On Wed, Sep 4, 2013 at 6:56 PM, Cory Stoker <[email protected]> wrote:
> We have lots of puppet clients on crappy bandwidth that would time out
> like this as well.  The option we changed to fix this is:
>
>     #Specify the timeout to wait for catalog in seconds
>     configtimeout = 600
>
> The default is only about 120 seconds.  Another thing you should do is
> check the web server logs if you are using Passenger: you should see a
> ton of "GET" requests while plugins are being synced.  To force your
> puppet agent to re-download everything, remove the $vardir/lib
> directory on the agent.
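>
> Roughly what that looks like on the agent side, as a sketch (the paths
> assume open source Puppet defaults, i.e. /etc/puppet/puppet.conf and
> vardir = /var/lib/puppet):
>
>     # /etc/puppet/puppet.conf on the agent
>     [agent]
>         # allow up to 10 minutes for catalog and plugin requests
>         configtimeout = 600
>
>     # force a full plugin re-download on the next run
>     rm -rf /var/lib/puppet/lib
>     puppet agent -t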
>
>
> On Wed, Sep 4, 2013 at 1:48 PM, Pete Hartman <[email protected]> wrote:
>> I'm having a similar problem.
>>
>> I know for a fact that I am not contending with other agents, because this
>> is in a lab environment and none of my agents is scheduled for periodic runs
>> (yet).
>>
>> I have successfully run puppet agent -t a first time, signed the cert, and
>> run it a second time to pull over stdlib and other modules on agents running
>> RHEL 6 and Solaris 10u10 x86.
>>
>> But I'm getting this timeout on a Solaris 10u10 box running on a T4-1 SPARC
>> system.
>>
>> This was my third run:
>>
>>  # date;puppet agent -t;date
>> Wed Sep  4 14:12:05 CDT 2013
>> Info: Retrieving plugin
>> Notice: /File[/var/lib/puppet/lib/puppet/parser/functions/count.rb]/ensure:
>> defined content as '{md5}9eb74eccd93e2b3c87fd5ea14e329eba'
>> Notice:
>> /File[/var/lib/puppet/lib/puppet/parser/functions/validate_bool.rb]/ensure:
>> defined content as '{md5}4ddffdf5954b15863d18f392950b88f4'
>> Notice:
>> /File[/var/lib/puppet/lib/puppet/parser/functions/get_module_path.rb]/ensure:
>> defined content as '{md5}d4bf50da25c0b98d26b75354fa1bcc45'
>> Notice:
>> /File[/var/lib/puppet/lib/puppet/parser/functions/is_ip_address.rb]/ensure:
>> defined content as '{md5}a714a736c1560e8739aaacd9030cca00'
>> Error:
>> /File[/var/lib/puppet/lib/puppet/parser/functions/is_numeric.rb]/ensure:
>> change from absent to file failed: execution expired
>>
>> Error: Could not retrieve plugin: execution expired
>> Info: Caching catalog for AGENT
>> Info: Applying configuration version '1378322110'
>> Notice: Finished catalog run in 0.11 seconds
>> Wed Sep  4 14:15:58 CDT 2013
>>
>>
>> Each time I've run it, I get about 10 or so files and then I get "execution
>> expired".
>>
>> What I'd really like to see is whether I can increase the expiry timeout.
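>>
>> (Assuming the relevant setting is the agent's configtimeout, I'm hoping a
>> temporary test could be as simple as the sketch below -- any setting can
>> also be passed as a command-line flag:)
>>
>>     puppet agent -t --configtimeout=600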
>>
>>
>> Some other details: the master is RHEL 6 on a Sun/Oracle X4800 with plenty
>> of fast cores and memory.  I'm using open source Puppet with Passenger.  I
>> have no real modules yet, other than some basic Forge modules I've installed
>> to start out with.
>>
>> [root@MASTER audit]# cd /etc/puppet/modules
>> [root@MASTER modules]# ls
>> apache  concat  epel  firewall  inifile  passenger  puppet  puppetdb  ruby
>> stdlib
>>
>> I briefly disabled SELinux on the master, but saw no change in behavior.
>>
>> I'm certain that the firewall is right because other agents have had no
>> problems.  iptables IS enabled, however.
>>
>> The master and the agent are on the same subnet, so I don't suspect a
>> network performance issue directly.
>>
>> On Solaris, because the vendor-supplied OpenSSL is antique and doesn't
>> include SHA256, we have built our own OpenSSL and our own Ruby against that
>> OpenSSL library.  Even though SPARC is a 64-bit architecture, Ruby seems to
>> default to a 32-bit build, so we built OpenSSL as 32-bit as well to match.
>> I've got an open question in to the guy responsible for that build to see
>> how hard it would be to build Ruby as 64-bit; that's likely the next test.
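>>
>> For what it's worth, these are the sanity checks I plan to run against that
>> build first (assuming the ruby on PATH is our custom one):
>>
>>     # which OpenSSL did Ruby link against, and does it provide SHA256?
>>     ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'
>>     ruby -ropenssl -e 'puts OpenSSL::Digest::SHA256.hexdigest("test")'
>>
>>     # 32- or 64-bit build?  1.size is 4 on a 32-bit Ruby, 8 on a 64-bit one
>>     ruby -e 'puts 1.size * 8'
>>     file `which ruby`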
>>
>> I have not yet run snoop on the traffic to see what's going on at the
>> network level, but as I say I don't really expect the network to be the
>> problem, given that we're on the same subnet and that agents on systems
>> with higher clock speeds have succeeded.
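>>
>> If I do end up tracing it, something along these lines should be enough
>> (net0 is a placeholder for the agent's interface; 8140 is the default
>> master port):
>>
>>     snoop -d net0 -o /tmp/puppet-agent.cap host MASTER and port 8140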
>>
>> Any pointers to other possible causes or somewhere I can (even temporarily)
>> increase the timeout would be appreciated.
>>
>>
>>
>>
>> On Thursday, August 8, 2013 8:56:33 AM UTC-5, jcbollinger wrote:
>>>
>>>
>>>
>>> On Wednesday, August 7, 2013 11:46:06 AM UTC-5, Cesar Covarrubias wrote:
>>>>
>>>> I am already using Passenger. My master is still only minimally
>>>> utilized, as I'm just now beginning the deployment process. In terms of
>>>> specs, it has 4 cores, 8 GB of memory, and 4 GB of swap. During a run,
>>>> total system usage is no more than 2 GB and no swap is used. There is no
>>>> network congestion, and I/O is low on the SAN that these VMs use.
>>>>
>>>> The odd thing is once the hosts get all the libs sync'd, performance is
>>>> fine on further changes. It's quite perplexing.
>>>>
>>>
>>> To be certain that contention by multiple Puppet clients does not
>>> contribute to the issue, ensure that the problem still occurs when only one
>>> client attempts to sync at a time.  If it does, then the issue probably has
>>> something to do with the pattern of communication between client and master,
>>> for that's the main thing that differs between an initial run and subsequent
>>> ones.
>>>
>>> During the initial plugin sync, the master delivers a moderately large
>>> number of small files to the client, whereas on subsequent runs it usually
>>> delivers only a catalog, and perhaps, later, 'source'd Files declared in
>>> your manifests.  There may be a separate connection established between
>>> client and master for each synced file, and anything that might slow that
>>> down could contribute to the problem.
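>>>
>>> One way to get a feel for that per-request cost is to time a single
>>> plugin-metadata request from the slow agent by hand.  This is only a
>>> sketch; the REST path and certificate locations are my guess at open
>>> source Puppet 3.x defaults on the agent:
>>>
>>>     time curl -H 'Accept: pson' \
>>>       --cert   /var/lib/puppet/ssl/certs/AGENT.pem \
>>>       --key    /var/lib/puppet/ssl/private_keys/AGENT.pem \
>>>       --cacert /var/lib/puppet/ssl/certs/ca.pem \
>>>       "https://MASTER:8140/production/file_metadata/plugins/puppet/parser/functions/count.rb"
>>>
>>> If a single request like that takes several seconds on its own, the
>>> cumulative cost over dozens of plugin files would easily exhaust the
>>> agent's timeout.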
>>>
>>> For instance, if a firewall on client, master, or any device between makes
>>> it slow or unreliable to establish connections; if multiple clients are
>>> configured with the same IP address; if a router anywhere along the network
>>> path is marginal; if a leg of the path is wireless and subject to
>>> substantial radio interference; if any part of your network is suffering
>>> from a denial-of-service attack; etc. then probabilistically speaking, the
>>> effect would be much more noticeable when a successful transaction requires
>>> many connections and data transfers than when it requires few.
>>>
>>>
>>> John
>>>
