On 21 May 2018 at 18:06, Mark Millard wrote:
> On 2018-May-21, at 5:46 PM, Mark Millard wrote:
> I should have been explicit that the material is from
> ci.freebsd.org .
Your email seems to always be marked as spam by my MUA even though
SPF, DKIM, and DMARC pass. Apologies if you've sent me other ...
On 05/22/18 18:12, Rodney W. Grimes wrote:
it makes me giggle that people still think non-amd64 is "legacy".
i386 is alive and well - new chips are being fabbed based on the 586
design with pci-e slots; not to mention things like the Talos and
AmigaOne for PowerPC.
Yes, somehow we need to shake off the idea that ...
On Tue, May 22, 2018 at 04:32:39PM -0700, K. Macy wrote:
> Why are you running i386 on that?
>
I'm not. Just pointing out that drm2 runs on amd64 as
well as i386. Also need to correct the disinformation
that drm2 only applies to mid-Haswell and older.
In the end, src committers will do what they ...
On Tue, May 22, 2018 at 07:50:39AM +0100, Johannes Lundberg wrote:
> On Mon, May 21, 2018 at 23:50 Steve Kargl
> wrote:
>
> > On Mon, May 21, 2018 at 03:20:49PM -0700, K. Macy wrote:
> > > >
> > > > I just ask.
> > > > Or why not include drm-next in the base svn repo and add some
> > > > option to make.conf to switch drm2/drm-next?
Why are you running i386 on that?
On Tue, May 22, 2018 at 4:26 PM, Steve Kargl
wrote:
> On Tue, May 22, 2018 at 02:56:55PM -0700, K. Macy wrote:
>> >
>> >
>> > it makes me giggle that people still think non-amd64 is "legacy".
>> >
>> > i386 is alive and well - new chips are being fabbed based on the 586
>> > design with pci-e slots; not to mention things like the Talos and
>> > AmigaOne for PowerPC.
On Tue, May 22, 2018 at 02:56:55PM -0700, K. Macy wrote:
> >
> >
> > it makes me giggle that people still think non-amd64 is "legacy".
> >
> > i386 is alive and well - new chips are being fabbed based on the 586
> > design with pci-e slots; not to mention things like the Talos and
> > AmigaOne for PowerPC.
> > I am concerned about just shoving it out to ports, as that makes
> > it rot even faster.
> >
> > I am still very concerned that our in base i9xx code is like 4
> > years old and everyone is told to go to kmod-next from ports
> > as well.
> >
> > No, I do not have a solution, but I have not tried hard to find ...
> I am concerned about just shoving it out to ports, as that makes
> it rot even faster.
>
> I am still very concerned that our in base i9xx code is like 4
> years old and everyone is told to go to kmod-next from ports
> as well.
>
> No, I do not have a solution, but I have not tried hard to find ...
>
> >
> >
> > it makes me giggle that people still think non-amd64 is "legacy".
> >
> > i386 is alive and well - new chips are being fabbed based on the 586
> > design with pci-e slots; not to mention things like the Talos and
> > AmigaOne for PowerPC.
Yes, somehow we need to shake off the idea that ...
>
>
> it makes me giggle that people still think non-amd64 is "legacy".
>
> i386 is alive and well - new chips are being fabbed based on the 586
> design with pci-e slots; not to mention things like the Talos and
> AmigaOne for PowerPC.
DRM2 doesn't support anything later than mid-Haswell. The ch ...
> I just ask.
> Or why not include drm-next in the base svn repo and add some
> option to make.conf to switch drm2/drm-next?
Even if it's not being built on amd64 we're still responsible for
keeping it building on !amd64 so long as it's in base. This makes
changing APIs ...
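For illustration, a sketch of what the proposed make.conf switch might
look like. The DRM_IMPLEMENTATION knob is hypothetical, invented here
only to show the idea; no such option exists in the tree:

    # /etc/make.conf -- hypothetical knob, does not exist today
    DRM_IMPLEMENTATION=drm-next   # build against the newer drm-next code
    #DRM_IMPLEMENTATION=drm2      # or keep the in-tree drm2 driver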
On Tue, 22 May 2018 10:12:22 +0200 "Alexander Leidinger"
said:
Hi,
I've updated 2 machines to r333966 and I see a change in the behavior
in the network area on one of the systems.
To begin with, the "original" behavior was not OK either, the em NIC
fails to "do proper network communication"
(https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220997).
On 05/22/18 13:47, dpolyg wrote:
I have one comment regarding usage of the drm2 on a "legacy" hardware.
Excuse me in advance if I misunderstand something.
For the last 2-3 years I've been playing with devices such as small form
factor PCs from Shuttle:
http://global.shuttle.com/products/productsList?
> On Tue, May 22, 2018 at 3:30 PM, Warner Losh wrote:
>
> > You can't, in general. By the time the boot loader starts, all knowledge
> > of past boots is gone, unless specific counter-measures were put in place.
> >
> > However, if root is UFS and read/write in your box, it will be unclean on
> > anything but a clean shutdown/reboot. If it's read-only, ZFS or NFS
> > mounted, ...
On Tue, May 22, 2018 at 04:16:32PM +0200, Alexander Leidinger wrote:
>
> Quoting Slawa Olhovchenkov (from Tue, 22 May 2018
> 15:29:24 +0300):
>
> > On Tue, May 22, 2018 at 08:17:00AM -0400, Steve Wills wrote:
> >
> >> I may be seeing similar issues. Have you tried leaving top -SHa running
> >
On Tue, May 22, 2018 at 3:30 PM, Warner Losh wrote:
> You can't, in general. By the time the boot loader starts, all knowledge
> of past boots is gone, unless specific counter-measures were put in place.
>
> However, if root is UFS and read/write in your box, it will be unclean on
> anything but a clean shutdown/reboot. If it's read-only, ZFS or NFS
> mounted, ...
You can't, in general. By the time the boot loader starts, all knowledge of
past boots is gone, unless specific counter-measures were put in place.
However, if root is UFS and read/write in your box, it will be unclean on
anything but a clean shutdown/reboot. If it's read-only, ZFS or NFS
mounted, ...
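One minimal counter-measure, sketched in sh under the assumption that
dumpdev is configured in /etc/rc.conf, so a panic leaves a kernel dump
for savecore(8) to find on the next boot:

    #!/bin/sh
    # Sketch only: infer a panic from the presence of a kernel dump.
    # Must run before savecore(8) clears the dump device; depending on
    # your setup the device may need to be named explicitly, e.g.
    # "savecore -C /dev/ada0p3".
    if savecore -C > /dev/null 2>&1; then
        echo "previous run ended in a panic (dump found)"
        # ... panic-path actions here ...
    else
        echo "no dump found: clean shutdown, or dumps not configured"
        # ... clean-path actions here ...
    fi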
Quoting Slawa Olhovchenkov (from Tue, 22 May 2018
15:29:24 +0300):
On Tue, May 22, 2018 at 08:17:00AM -0400, Steve Wills wrote:
I may be seeing similar issues. Have you tried leaving top -SHa running
and seeing what threads are using CPU when it hangs? I did and saw pid
17 [zfskern{txg_thread_enter}] using lots of CPU but no disk activity
happening. Do you see similar?
On 05/22/18 10:17, Alexander Leidinger wrote:
Hi,
does someone else experience deadlocks / hangs in ZFS?
Yes, in conjunction with Poudriere, probably when it builds/activates jails.
Not sure this is the same problem you are seeing.
bye
av.
Hi
In the boot process on my test machines I'd like to do different things
depending on whether the last run was a clean shutdown or a kernel panic. Where/How
can I get this information?
Thanks!
Yesterday I committed some changes to uchcom (so far, only in CURRENT).
Commits are r333997 - r334002.
If you have a CH340/341 based USB<->RS232 adapter and it works for you, could
you please test that it still does?
If you tried your adapter in the past and it did not work, there is a chance it works now.
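For a quick smoke test, something along these lines should do, assuming
the adapter attaches via uchcom and shows up as /dev/cuaU0 (the unit
number may differ on your machine):

    # Confirm the driver attached and note the ucom unit:
    dmesg | grep -i uchcom
    # Open the port with cu(1); with a loopback jumper across TXD/RXD,
    # typed characters should echo back.  Exit with ~.
    cu -l /dev/cuaU0 -s 115200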
On Tue, 22 May 2018 10:17:49 +0200
Alexander Leidinger wrote:
> does someone else experience deadlocks / hangs in ZFS?
I did experience ZFS hangs on heavy load on relatively big iron (using
rsync, in my case). The hangs were cured by reducing the amount of available
RAM to the ZFS caching mechanism (the ARC). ...
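For reference, the ARC cap is a loader tunable; a sketch, with the size
purely illustrative rather than a value from this thread:

    # /boot/loader.conf
    # Cap the ZFS ARC so heavy parallel work leaves RAM for the rest of
    # the system; "8G" is an example, tune to your workload.
    vfs.zfs.arc_max="8G"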
On Tue, May 22, 2018 at 08:17:00AM -0400, Steve Wills wrote:
> I may be seeing similar issues. Have you tried leaving top -SHa running
> and seeing what threads are using CPU when it hangs? I did and saw pid
> 17 [zfskern{txg_thread_enter}] using lots of CPU but no disk activity
> happening. Do you see similar?
I may be seeing similar issues. Have you tried leaving top -SHa running
and seeing what threads are using CPU when it hangs? I did and saw pid
17 [zfskern{txg_thread_enter}] using lots of CPU but no disk activity
happening. Do you see similar?
Steve
On 05/22/18 04:17, Alexander Leidinger wrote: ...
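A sketch of how to capture that kind of evidence, assuming the box stays
responsive long enough to start a logger beforehand; pid 17 is from the
report above and will differ on other systems:

    # Leave a batch-mode top logging to a file before the hang:
    top -b -SHa -d 1000 -s 5 > /var/tmp/top-hang.log &
    # Once it hangs, dump the kernel stacks of the suspect threads;
    # procstat -kk prints the in-kernel call chain for each thread:
    procstat -kk 17 > /var/tmp/zfskern-stacks.log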
Hi,
does someone else experience deadlocks / hangs in ZFS?
What I see is that if on a 2 socket / 4 cores -> 16 threads system I
do a lot in parallel (e.g. updating ports in several jails), then the
system may get into a state where I can log in, but any exit (e.g. from
top) or logout of shell ...
Hi,
I've updated 2 machines to r333966 and I see a change in the behavior
in the network area on one of the systems.
To begin with, the "original" behavior was not OK either, the em NIC
fails to "do proper network communication"
(https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220997).
On Tue, May 22, 2018 at 8:50 AM, Johannes Lundberg
wrote:
> On Mon, May 21, 2018 at 23:50 Steve Kargl
> wrote:
>
> > On Mon, May 21, 2018 at 03:20:49PM -0700, K. Macy wrote:
> > > >
> > > > I just ask.
> > > > Or why not include drm-next in the base svn repo and add some
> > > > option to make.conf to switch drm2/drm-next?