Rich Freeman wrote:
> On Wed, Apr 17, 2024 at 9:33 AM Dale <rdalek1...@gmail.com> wrote:
>> Rich Freeman wrote:
>>
>>> All AM5 CPUs have GPUs, but in general motherboards with video outputs
>>> do not require the CPU to have a GPU built in.  The ports just don't
>>> do anything if this is lacking, and you would need a dedicated GPU.
>>>
>> OK.  I read that a few times.  If I want to use the onboard video I have
>> to have a certain CPU that supports it?  Do those have something so I
>> know which is which?  Or do I read that as all the CPUs support onboard
>> video but if one plugs in a video card, that part of the CPU isn't
>> used?  The last one makes more sense but asking to be sure.
> To use onboard graphics, you need a motherboard that supports it, and
> a CPU that supports it.  I believe that internal graphics and an
> external GPU card can both be used at the same time.  Note that
> internal graphics solutions typically steal some RAM from other system
> use, while an external GPU will have its own dedicated RAM (and those
> can also make use of internal RAM too).
>
> The 7600X has a built-in RDNA2 GPU.   All the original Ryzen zen4 CPUs
> had GPU support, but it looks like they JUST announced a new line of
> consumer zen4 CPUs that don't have it - they all end in an F right
> now.
>
> In any case, if you google the CPU you're looking at it will tell you
> if it supports integrated graphics.  Most better stores/etc have
> filters for this feature as well (places like Newegg or PCPartPicker
> or whatever).
>
> If you don't play games, then definitely get integrated graphics.
> Even if the CPU costs a tiny bit more, it will give you a free empty
> 16x PCIe slot at whatever speed the CPU supports (v5 in this case -
> which is as good as you can get right now).
>

Sounds good.  So right now, if I buy a mobo with a couple video ports,
any current CPU will make the integrated video work.  There are some newer
CPUs coming out that don't, so double check first, just to be sure.  ;-) 

>> I might add, simply right clicking on the desktop can take sometimes 20
>> or 30 seconds for the menu to pop up.  Switching from one desktop to
>> another can take several seconds, sometimes 8 or 10.  This rig is
>> getting slower.  Actually, the software is just getting bigger.  You get
>> my meaning tho.  I bet the old KDE3 would be blazingly fast compared to
>> the rig I ran it on originally.
> That sounds like RAM but I couldn't say for sure.  In any case a
> modern system will definitely help.


When I first built this rig, it was very quick to respond to anything I
did.  Some of it could be all the hard drives I have installed here, 10 I
think right now, but I suspect it mostly takes longer for the software to do
its thing because that software has gotten larger over the years. 
The same thing happened to my old original rig, an AMD 2500+ single core
with, I think, a few gigabytes of RAM.  The software just outgrew the
ability of the hardware to keep up.  I'm thinking of moving my torrent
software to the NAS box.  That thing takes a good bit of bandwidth
itself and keeps the drives and network busy, which can't help any. 

>> I'd get 32GBs at first.  Maybe a month or so later get another 32GB.
>> That'll get me 64Gbs.  Later on, a good sale maybe, buy another 32GB or
>> a 64GB set and max it out.
> You definitely want to match the timings, and you probably want to
> match the sticks themselves.  Also, you generally need to be mindful
> of how many channels you're occupying, though as I understand it DDR5
> is essentially natively dual channel.  If you just stick one DDR4
> stick in a system it will not perform as well as two sticks of half
> the size.  I forget the gory details but I believe it comes down to
> the timings of switching between two different channels vs moving
> around within a single one.  DDR RAM timings get really confusing, and
> it comes down to the fact that addresses are basically grouped in
> various ways and randomly seeking from one address to another can take
> a different amount of time depending on how the new address is related
> to the address you last read.  The idea of "seeking" with RAM may seem
> odd, but recent memory technologies are a bit like storage, and they
> are accessed in a semi-serial manner.  Essentially the latencies and
> transfer rates are such that even dynamic RAM chips are too slow to
> work in the conventional sense.  I'm guessing it gets into a lot of
> gory details with reactances and so on, and just wiring up every
> memory cell in parallel like in the old days would slow down all the
> voltage transitions.

I used a memory finder tool to find what fits that ASUS mobo.  It takes
32GB sticks IN PAIRS.  Also, according to one manufacturer, they come in
matched sets.  A 64GB set costs almost $200.  Still, I can make it on
64GB for a while.  Add another set later.  I got the impression that
installing only one stick might not perform too well. 
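
Out of curiosity I scribbled out the usual back-of-the-envelope math on
this.  It's just a rough sketch in Python, nothing exact: the DDR5-5600
CL46 numbers are only an assumed example kit, not anything I've picked
out, and it only shows why two sticks give roughly double the peak
bandwidth of one.  The channel-switching timing stuff you mentioned
isn't modeled at all.

# Rough DDR5 numbers -- assumed example kit, not a spec sheet.

def first_word_latency_ns(transfer_rate_mt_s, cas_latency):
    # DDR transfers twice per clock, so the clock period in ns is
    # 2000 / (transfer rate in MT/s); CAS latency is counted in clocks.
    return 2000.0 * cas_latency / transfer_rate_mt_s

def peak_bandwidth_gb_s(transfer_rate_mt_s, dimms):
    # Each DIMM is 64 bits (8 bytes) wide counting both DDR5 sub-channels,
    # and peak bandwidth scales with how many channels are populated.
    return transfer_rate_mt_s * 8 * dimms / 1000.0

# Assumed example: DDR5-5600 CL46
print(round(first_word_latency_ns(5600, 46), 1), "ns first-word latency")
print(round(peak_bandwidth_gb_s(5600, 1), 1), "GB/s peak with one stick")
print(round(peak_bandwidth_gb_s(5600, 2), 1), "GB/s peak with two sticks")

For that assumed kit it works out to roughly 16 ns first-word latency
and about 45 GB/s peak per stick, so around 90 GB/s with the pair. 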

>> I've looked at server type boards.  I'd like to have one.  I'd like one
>> that has SAS ports.
> So, I don't really spend much time looking at them, but I'm guessing
> SAS is fairly rare on the motherboards themselves.  They probably
> almost always have an HBA/RAID controller in a PCIe slot.  You can put
> the same cards in any PC, but of course you're just going to struggle
> to have a slot free.  You can always use a riser or something to cram
> an HBA into a slot that is too small for it, but then you're going to
> suffer reduced performance.  For just a few spinning disks though it
> probably won't matter.
>
> Really though I feel like the trend is towards NVMe and that gets into
> a whole different world.  U.2 allows either SAS or PCIe over the bus,
> and there are HBAs that will handle both.  Or if you only want NVMe it
> looks like you can use bifurcation-based solutions to more cheaply
> break slots out.
>
> I'm kinda thinking about going that direction when I expand my Ceph
> cluster.  There are very nice NVMe server designs that can get 24
> drives into 2U or whatever, but they are very modern and cost a
> fortune even used it seems.  I'm kinda thinking about maybe getting a
> used workstation with enough PCIe slots free that support bifurcation
> and using one for a NIC and another for 4x U.2 drives.  If the used
> workstation is cheap ($100-200) that is very low overhead per drive
> compared to the server solutions.  (You can also do 4x M.2 instead.)
> These days enterprise U.2 drives are the same price as SATA/M.2 for
> the same feature set, and in U.2 you can get much larger capacity
> drives.  It might be a while before the really big ones start becoming
> cheap though...
>

I don't recall the brand or anything, but I saw a mobo that had SAS
ports.  I didn't see any SATA ports, or those M.2 things either.  I
remember it being used and very pricey.  I figure it was an older board
built for a specific purpose, lots of fast drives I'd guess.  I
suspect, as you pointed out, most people just install a SAS card, with or
without RAID, and use that.  After all, with the right PCIe slot and
card, it can be plenty fast. 
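
Just to put a rough number on "plenty fast", here's a quick Python
sketch of what I mean.  The per-lane PCIe rates are the usual approximate
usable figures, and the 250 MB/s per spinning disk is just my own
assumption, so treat it as ballpark only.

# Ballpark PCIe throughput vs. spinning disks -- approximate figures only.

# Approximate usable bandwidth per lane in GB/s, by PCIe generation.
PCIE_GB_S_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def slot_bandwidth_gb_s(gen, lanes):
    # Usable one-direction bandwidth for a slot of the given width.
    return PCIE_GB_S_PER_LANE[gen] * lanes

def disks_it_can_feed(gen, lanes, disk_mb_s=250):
    # Assumes roughly 250 MB/s sequential per spinning disk.
    return int(slot_bandwidth_gb_s(gen, lanes) * 1000 // disk_mb_s)

# Example: a SAS HBA in a PCIe 3.0 x8 slot
print(round(slot_bandwidth_gb_s(3, 8), 1), "GB/s from a PCIe 3.0 x8 slot")
print(disks_it_can_feed(3, 8), "spinning disks at full sequential speed")

By that rough math even an older 3.0 x8 slot could keep around 30
spinning disks busy at full sequential speed, so the card and slot are
unlikely to be the bottleneck for my kind of use. 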

I look at servers, the kind that slide into racks, and they are nifty. 
Most are really fairly small but pack a big punch.  Some have dual or
even quad CPUs in them and lots of slots for memory.  It amazes me how
they cram so much into those little things.  It's no wonder those fans can
sound like a jet engine though.

This discussion has given me a nifty path to go down, both now and down
the road. 

Dale

:-)  :-) 
