On Mon, Nov 18, 2013 at 11:00:55PM +0100, Andreas Tobler wrote:
> I prepared two patches, see below. The amd64 one is reviewed by bde@ and
> the i386 one is compile-tested by me (runtime testing is theoretically also
> done, but I'm not sure, since I do not have 32-bit apps on my amd64).
Use cc -m32.
>
> The
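For reference, a minimal sketch of such a test on amd64, assuming the system
was built with the 32-bit compatibility libraries (lib32); the source file
name is only an example:

cc -m32 -o hello32 hello.c    # links against /usr/lib32
file hello32                  # should report an ELF 32-bit i386 executable
./hello32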
On Mon, 18 Nov 2013, Adrian Chadd wrote:
Remember that for Netflix, we have a mostly non-cacheable workload
(with some very specific exceptions!) and thus we churn through VM
pages at a prodigious rate. 20 Gbit/sec, or ~2.4 gigabytes a
second, or ~680,000 4-kilobyte pages a second. It's quite
On Mon, 18 Nov 2013, Alexander Motin wrote:
On 18.11.2013 21:11, Jeff Roberson wrote:
On Mon, 18 Nov 2013, Alexander Motin wrote:
I've created a patch, based on earlier work by avg@, to add back
pressure to UMA allocation caches. The problem of physical memory or
KVA exhaustion has existed there for
Hey,
random_harvestq eats much, much CPU on alix2c3:
CPU: Geode(TM) Integrated Processor by AMD PCS (498.06-MHz 586-class CPU)
glxsb0: mem 0xefff4000-0xefff7fff irq 9 at device 1.2 on pci0
Could you please add a sysctl/loader knob for it, or a way to throttle
collection?
Here's top output:
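Aside: even without a dedicated throttle knob, FreeBSD of that era already
exposed per-source harvesting switches under kern.random.sys.harvest. A
hedged sketch (sysctl names as of roughly 9.x/10.0; worth double-checking on
the running revision):

sysctl kern.random.sys.harvest                   # list the per-source switches
sysctl kern.random.sys.harvest.ethernet=0        # stop harvesting from network traffic
sysctl kern.random.sys.harvest.point_to_point=0
sysctl kern.random.sys.harvest.interrupt=0       # stop harvesting from device interrupts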
On 18.11.13 23:56, Adrian Chadd wrote:
> [snip]
>
> wiki.freebsd.org/FreeBSD/mips has links to the MIPS emulator setups.
>
> There's no excuse to avoid testing on MIPS. :-)
Np, wasn't aware of that. :) And I don't shy away from the work.
Thanks,
Andreas
Hello,
while updating my SweetHome3D port I'd like to enable staging support.
The build process uses bsd.java.mk to build via ant.
After removing NO_STAGE and adding ${STAGEDIR} to the respective
directories, I get the following message at *make
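For what it's worth, a minimal sketch of the usual staging conversion for a
bsd.java.mk/ant port; the target and file names below are only illustrative,
not taken from the actual SweetHome3D Makefile:

do-install:
	${MKDIR} ${STAGEDIR}${JAVASHAREDIR}/sweethome3d
	${INSTALL_DATA} ${WRKSRC}/build/SweetHome3D.jar \
		${STAGEDIR}${JAVASHAREDIR}/sweethome3d
	${INSTALL_SCRIPT} ${WRKDIR}/sweethome3d.sh ${STAGEDIR}${PREFIX}/bin/sweethome3d

In short, every install path gains the ${STAGEDIR} prefix, while pkg-plist
keeps the unprefixed paths.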
[snip]
wiki.freebsd.org/FreeBSD/mips has links to the MIPS emulator setups.
There's no excuse to avoid testing on MIPS. :-)
-adrian
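For those without hardware, a hedged sketch of booting a MIPS (Malta) kernel
under qemu; the kernel and image file names are placeholders, see the wiki
page above for the actual setup:

qemu-system-mips -M malta -m 128 -nographic \
	-kernel kernel.MALTA -hda freebsd-mips.img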
On 14.11.13 07:00, Konstantin Belousov wrote:
> On Wed, Nov 13, 2013 at 10:18:27PM +0100, Andreas Tobler wrote:
>> On 11.11.13 08:47, Konstantin Belousov wrote:
>>> On Sat, Nov 09, 2013 at 11:16:08PM +0100, Andreas Tobler wrote:
Hi all,
anyone interested in this patch to remove the W
Remember that for Netflix, we have a mostly non-cacheable workload
(with some very specific exceptions!) and thus we churn through VM
pages at a prodigious rate. 20 Gbit/sec, or ~2.4 gigabytes a
second, or ~680,000 4-kilobyte pages a second. It's quite frightening
and it's only likely to increase.
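As a rough sanity check on those figures: 20 Gbit/sec is 2.5 gigabytes per
second, and at 4 KiB per page that is several hundred thousand pages per
second, consistent with the order of magnitude quoted:

20 Gbit/s / 8 bits per byte     = 2.5 GB/s
2.5e9 B/s / 4096 B per page     ~ 610,000 pages/s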
On 18.11.2013 21:11, Jeff Roberson wrote:
On Mon, 18 Nov 2013, Alexander Motin wrote:
I've created a patch, based on earlier work by avg@, to add back
pressure to UMA allocation caches. The problem of physical memory or
KVA exhaustion has existed there for many years and it is quite critical
now for improving
On Mon, 18 Nov 2013, Alexander Motin wrote:
Hi.
I've created a patch, based on earlier work by avg@, to add back pressure to
UMA allocation caches. The problem of physical memory or KVA exhaustion
has existed there for many years and it is quite critical now for improving
system performance while
On 18.11.2013 14:10, Adrian Chadd wrote:
On 18 November 2013 01:20, Alexander Motin wrote:
On 18.11.2013 10:41, Adrian Chadd wrote:
So, do you get any benefits from just the first one, or first two?
I don't see much reason to handle that in pieces. As I have described above,
each part has ow
On 18 November 2013 01:20, Alexander Motin wrote:
> On 18.11.2013 10:41, Adrian Chadd wrote:
>>
>> Your patch does three things:
>>
>> * adds a couple new buckets;
>
>
> These new buckets make bucket size self-tuning softer and more precise.
> Without them there are buckets for 1, 5, 13, 29, ... items.
On 18.11.2013 11:45, Luigi Rizzo wrote:
On Mon, Nov 18, 2013 at 10:20 AM, Alexander Motin wrote:
On 18.11.2013 10:41, Adrian Chadd wrote:
Your patch does three things:
* adds a couple new buckets;
These new buckets make bucket size self-tuning softer and more precise.
On Mon, Nov 18, 2013 at 10:20 AM, Alexander Motin wrote:
> On 18.11.2013 10:41, Adrian Chadd wrote:
>
>> Your patch does three things:
>>
>> * adds a couple new buckets;
>>
>
> These new buckets make bucket size self-tuning softer and more precise.
> Without them there are buckets for 1, 5, 13, 29,
On 18.11.2013 10:41, Adrian Chadd wrote:
Your patch does three things:
* adds a couple new buckets;
These new buckets make bucket size self-tuning softer and more precise.
Without them there are buckets for 1, 5, 13, 29, ... items. While at
bigger sizes a difference of about 2x is fine, at the smallest
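A hedged reading of that progression: each quoted size is roughly twice the
previous one (5 = 2*1 + 3, 13 = 2*5 + 3, 29 = 2*13 + 3), which is what falls
out if buckets are sized to fill power-of-two allocations after a fixed
header; e.g. assuming 8-byte item pointers and a ~24-byte bucket header:

24 + 1*8   = 32 bytes
24 + 5*8   = 64 bytes
24 + 13*8  = 128 bytes
24 + 29*8  = 256 bytes

At the large end a 2x step is gentle, but at the small end jumping from 1 to
5 or 5 to 13 items grows the per-CPU cache footprint several-fold in a single
step, hence the intermediate sizes.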
on 18/11/2013 10:51 Beeblebrox said the following:
>> Is there anything "non-standard" about your configuration?
>> All pools that are present in /boot/zfs/zpool.cache on the root filesystem of
>> the root pool should be automatically imported.
>
> Yes I know that, hence the reason I posted.
>
> ll /
On 2013-11-18 03:51, Beeblebrox wrote:
>> Is there anything "non-standard" about your configuration?
>> All pools that are present in /boot/zfs/zpool.cache on the root filesystem of
>> the root pool should be automatically imported.
> Yes I know that, hence the reason I posted.
>
> ll /boot/zfs shows
> Is there anything "non-standard" about your configuration?
> All pools that are present in /boot/zfs/zpool.cache on the root filesystem of
> the root pool should be automatically imported.
Yes I know that, hence the reason I posted.
ll /boot/zfs shows recently updated zpool.cache =>
-rw-r--r-- 1
Hi!
Your patch does three things:
* adds a couple new buckets;
* reduces some lock contention
* does the aggressive backpressure.
So, do you get any benefits from just the first one, or first two?
-adrian
On 17 November 2013 15:09, Alexander Motin wrote:
> Hi.
>
> I've created a patch, based
on 18/11/2013 09:47 Beeblebrox said the following:
> I have root on zfs, which mounts fine on start. I have two other pools, which
> do not get mounted and must be imported each time. Boot drops to single-user
> mode because it cannot find the zfs-related mounts for the two pools in
> question. The
>> Do have zfs_enable="YES" in rc.conf?
Yes, and my ZFS root mounts without problem. Also in /boot/loader.conf:
zfs_load="YES"
opensolaris_load="YES"
vfs.root.mountfrom="zfs:bsds"
#-
#_ZFS_PERFORMANCE
#I have 4G of Ram
vfs.zfs.prefetch_disable=0
#Ram 4GB => 512. Ram 8GB => value 1024
vfs.
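A commonly suggested fix for pools that are listed in the cache yet still fail
to auto-import is to re-record them via the cachefile pool property (the pool
names below are placeholders for the two non-root pools):

zpool import pool2
zpool set cachefile=/boot/zfs/zpool.cache pool2
zpool get cachefile pool2     # verify it no longer reads 'none' or a stray path

Repeat for the second pool, then reboot and check whether both now import
automatically.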