Hello list,
I'm running two FreeBSD dists, each one on a different notebook.
I have FreeBSD 8.1-RC2 running on amd64 with 2 cores and 2 GB mem (my old
HP), and FreeBSD 9-CURRENT running on
an Intel i3 (2 cores + 2 logical cores) with 4 GB RAM, and I was noticing
that the AMD notebook was performi
On 17.07.10 17:45, Edward Tomasz Napierala wrote:
Author: trasz
Date: Sat Jul 17 15:45:20 2010
New Revision: 210194
URL: http://svn.freebsd.org/changeset/base/210194
Log:
Remove updating process count by unionfs. It serves no purpose, unionfs just
needs root credentials for a moment.
Mod
On Fri, 16 Jul 2010 09:58, Gabor Kovesdan wrote:
In Message-Id: <4c406589.7030...@freebsd.org>
Hi folks,
Steven Kreuzer wrote a periodic script to run csup updates with periodic
daily. I've reviewed it and added support for multiple supfiles. I'd like to
commit this to head.
: http://kove
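A periodic(8)-style daily job driving csup over several supfiles could be sketched roughly as below. The function name, knob name, and supfile paths are illustrative assumptions, not Steven Kreuzer's actual script:

```shell
#!/bin/sh
# sketch of a periodic(8)-style daily job running csup over multiple
# supfiles; run_csup_updates and its arguments are hypothetical names,
# not the script under review on the list
run_csup_updates() {
    enable=$1; shift    # $1 = rc.conf-style knob, rest = supfiles
    case "$enable" in
    [Yy][Ee][Ss])
        for supfile in "$@"; do
            echo "==> csup -L 1 $supfile"
            # csup -L 1 "$supfile"   # would actually run on a real system
        done
        ;;
    esac
}

run_csup_updates YES /etc/src-supfile /etc/ports-supfile
```

On a real system the function body would invoke csup instead of echoing, and the knob would come from periodic.conf.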
On Sat, 17 Jul 2010, Kostik Belousov wrote:
Note that the intr time most likely comes from the interrupt threads chewing
the CPU, not from the real interrupt handlers doing something, and definitely
not from a high interrupt rate, as your vmstat -i output already showed.
Run top in the mode where
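Presumably the mode meant here is top's system-process (-S) and per-thread (-H) display, which lists each interrupt thread separately. One rough way to pull the busiest thread out of such output; the sample lines below are fabricated stand-in data, not real top output:

```shell
#!/bin/sh
# find the interrupt thread with the highest WCPU in top -SH style output;
# the sample is made up for illustration (field 10 is the WCPU column)
sample='root 12 -60 - 0K 160K WAIT 0 3:38 14.08% intr{irq16: uhci0}
root 12 -60 - 0K 160K WAIT 1 0:02 0.10% intr{irq14: ata0}'

busiest=$(printf '%s\n' "$sample" |
    awk '{ sub("%", "", $10); if ($10 + 0 > max) { max = $10; line = $0 } }
         END { print line }')
echo "busiest: $busiest"
```

On a live system the sample would be replaced by something like `top -SHb` output piped straight into the awk filter.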
On Sat, Jul 17, 2010 at 12:10:26PM -0700, Doug Barton wrote:
> On Sat, 17 Jul 2010, Rui Paulo wrote:
>
> >This doesn't indicate any problem. I suggest you try to figure out what
> >interrupt is causing this by adding printfs or disabling drivers one by
> >one.
>
> I've no idea where to even beg
On Sat, 17 Jul 2010, Rui Paulo wrote:
You can try bisecting the faulty revision.
The problem has been going on for months, the primary symptom for a long
time was the nvidia driver, so I stopped using it for a while hoping
that a solution would magically appear. As of the last 6 weeks or so
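Lacking an `svn bisect` subcommand, the bisection Rui suggests has to be done by hand as a binary search over revisions. A sketch, with `is_bad` standing in for "update to the revision, rebuild, reproduce the bug", and all revision numbers purely illustrative:

```shell
#!/bin/sh
# manual binary search over svn revisions; is_bad is a placeholder for
# "svn update -r N /usr/src, rebuild kernel/world, test for the symptom"
is_bad() {
    # hypothetical: pretend the regression appeared at r210000
    [ "$1" -ge 210000 ]
}

good=209000    # last known-good revision (assumed)
bad=210194     # known-bad revision (assumed)
while [ $((bad - good)) -gt 1 ]; do
    mid=$(( (good + bad) / 2 ))
    if is_bad "$mid"; then bad=$mid; else good=$mid; fi
done
echo "first bad revision: r$bad"
```

Each iteration halves the range, so even months of commits narrow down in a dozen or so rebuild-and-test cycles.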
On 17 Jul 2010, at 20:10, Doug Barton wrote:
> On Sat, 17 Jul 2010, Rui Paulo wrote:
>
>> This doesn't indicate any problem. I suggest you try to figure out what
>> interrupt is causing this by adding printfs or disabling drivers one by one.
>
> I've no idea where to even begin on something li
On Sat, 17 Jul 2010, Rui Paulo wrote:
This doesn't indicate any problem. I suggest you try to figure out what
interrupt is causing this by adding printfs or disabling drivers one by one.
I've no idea where to even begin on something like that. Given that
there are other -current users who ar
On Sat, Jul 17, 2010 at 11:21 AM, Marco van Lienen
wrote:
> On Sat, Jul 17, 2010 at 10:12:10AM -0700, you (Freddie Cash) sent the
> following to the -current list:
>> >
>> > I have read many things about those differences, but why then does zfs on
>> > opensolaris report more available space whe
On 17 Jul 2010, at 08:17, Doug Barton wrote:
> This is happening after I open a flash video in firefox and watch it for
> 15 minutes:
>
> root 20 -80 - 0K 160K WAIT 0 3:38 14.08% intr
>
> After this happens, my system goes into a death spiral and I have to shut it
> down.
>
On 17 Jul 2010, at 19:04, Doug Barton wrote:
> On Sat, 17 Jul 2010, Rui Paulo wrote:
>
>>
>> On 17 Jul 2010, at 08:17, Doug Barton wrote:
>>
>>> This is happening after I open a flash video in firefox and watch it for
15 minutes:
>>>
>>> root 20 -80 - 0K 160K WAIT 0 3:38
On Sat, Jul 17, 2010 at 10:12:10AM -0700, you (Freddie Cash) sent the following
to the -current list:
> >
> > I have read many things about those differences, but why then does zfs on
> > opensolaris report more available space whereas FreeBSD does not?
> > That would imply that my friend running
On Sat, 17 Jul 2010, Rui Paulo wrote:
On 17 Jul 2010, at 08:17, Doug Barton wrote:
This is happening after I open a flash video in firefox and watch it for
15 minutes:
root 20 -80 - 0K 160K WAIT 0 3:38 14.08% intr
After this happens, my system goes into a death spiral and
Hi guys,
Semi-regularly (every two-three days) I'm seeing what appears to be some
sort of filesystem wedge. I usually see it initially with web browsers,
but it's possible that's only because it's what produces most disk
activity on this machine. I've seen it with both Opera and Firefox.
Wh
On Sat, Jul 17, 2010 at 3:51 AM, Marco van Lienen
wrote:
> On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the
> following to the -current list:
>> On 17.07.2010 at 12:14, Marco van Lienen wrote:
>>
>> > # zpool list pool1
>> > NAME SIZE USED AVAIL CAP HEALTH ALTROO
On Sat, Jul 17, 2010 at 01:04:52PM +0200, you (Stefan Bethke) sent the
following to the -current list:
> >
> > I have read many things about those differences, but why then does zfs on
> > opensolaris report more available space whereas FreeBSD does not?
> > That would imply that my friend runni
On Sunday 11 July 2010 22:14:44 Doug Barton wrote:
> On 07/08/10 14:52, Rene Ladan wrote:
> > On 08-07-2010 22:09, Doug Barton wrote:
> >> On Thu, 8 Jul 2010, John Baldwin wrote:
> >>> These freezes and panics are due to the driver using a spin mutex
> >>> instead of a
> >>> regular mutex for the p
Hi,
I updated my 8.1-PRERELEASE to ZFS version 15. The patch
http://people.freebsd.org/~mm/patches/zfs/v15/head-v15-v3.patch applies fine,
and after a reboot I upgraded my pool successfully to version 15. Now, after a new
reboot the bootloader can't boot from version 15; it supports only 13. Well,
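This is the expected failure mode when the boot blocks on disk predate the pool version: they must be rebuilt from the patched source tree and reinstalled. A hypothetical recovery sketch, where the source path, partition index (-i 1), and disk name (ada0) are assumptions for illustration only:

```shell
#!/bin/sh
# rebuild the loader and ZFS boot blocks from the patched tree, then
# reinstall them on disk; adjust paths, index, and disk to your layout
set -e
cd /usr/src/sys/boot
make obj && make depend && make all install

# for a GPT layout with a ZFS root:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```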
On Fri, Jul 16, 2010 at 07:04:38PM -0400, Lowell Gilbert wrote:
> Alex Kozlov writes:
> > On Fri, Jul 16, 2010 at 04:27:39PM +0200, Gabor Kovesdan wrote:
> >> On 2010.07.16. 16:23, Alex Kozlov wrote:
> >> > On Fri, Jul 16, 2010 at 03:58:33PM +0200, Gabor Kovesdan wrote:
> >> >
> >> > Thousands
On 17.07.2010 at 12:51, Marco van Lienen wrote:
> On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the
> following to the -current list:
>> On 17.07.2010 at 12:14, Marco van Lienen wrote:
>>
>>> # zpool list pool1
>>> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
>>> poo
On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the
following to the -current list:
> On 17.07.2010 at 12:14, Marco van Lienen wrote:
>
> > # zpool list pool1
> > NAME SIZE USED AVAIL CAP HEALTH ALTROOT
> > pool1 5.44T 147K 5.44T 0% ONLINE -
> ...
> > zfs
On 17.07.2010 at 12:14, Marco van Lienen wrote:
> # zpool list pool1
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
> pool1 5.44T 147K 5.44T 0% ONLINE -
...
> zfs list however only shows:
> # zfs list pool1
> NAME USED AVAIL REFER MOUNTPOINT
> pool1 91.9K 3.56T 28.0K /po
On Tue, Jul 13, 2010 at 04:02:42PM +0200, you (Martin Matuska) sent the
following to the -current list:
> Dear community,
>
> Feel free to test everything and don't forget to report any bugs found.
When I create a raidz pool of 3 equally sized HDDs (3x2TB WD Caviar Green
drives) the reported
> [periodic updating source]
Besides technical feasibility: What is the use case behind it?
Regards
Christof
On Saturday 17 July 2010 10:00:07, Matthew Seaman wrote:
> On 17/07/2010 24:04:38, Lowell Gilbert wrote:
> > Alex Kozlov writes:
> >> On Fri, Jul 16, 2010 at 04:27:39PM +0200, Gabor Ko
On 17/07/2010 24:04:38, Lowell Gilbert wrote:
> Alex Kozlov writes:
>
>> On Fri, Jul 16, 2010 at 04:27:39PM +0200, Gabor Kovesdan wrote:
>>> On 2010.07.16. 16:23, Alex Kozlov wrote:
On Fri, Jul 16, 2010 at 03:58:33PM +0200, Gabor Kovesdan wrote:
Thousands of PCs simultaneously try t
This is happening after I open a flash video in firefox and watch it for
15 minutes:
root 20 -80 - 0K 160K WAIT 0 3:38 14.08% intr
After this happens, my system goes into a death spiral and I have to
shut it down.
vmstat -i
interrupt total rate