[gentoo-dev] [PATCH] ruby-fakegem.eclass: compile ruby31 extensions with gnu17

2024-12-08 Thread Hans de Graaff
The varargs implementation in Ruby 3.1 is not compatible with gnu23. Ruby
3.1 is in security maintenance mode upstream so it is unlikely that the
fixes from Ruby 3.2 will be backported. Ruby 3.1 is EOL in March 2025
and will be removed from Gentoo around that time.

Signed-off-by: Hans de Graaff 
---
 eclass/ruby-fakegem.eclass | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/eclass/ruby-fakegem.eclass b/eclass/ruby-fakegem.eclass
index eb6257a50cf9..fc78428be714 100644
--- a/eclass/ruby-fakegem.eclass
+++ b/eclass/ruby-fakegem.eclass
@@ -23,6 +23,8 @@ case ${EAPI} in
*) die "${ECLASS}: EAPI ${EAPI:-0} not supported" ;;
 esac
 
+# flag-o-matic is only required for ruby31 support.
+inherit flag-o-matic
 inherit ruby-ng
 
 # @ECLASS_VARIABLE: RUBY_FAKEGEM_NAME
@@ -424,6 +426,16 @@ EOF
 each_fakegem_configure() {
debug-print-function ${FUNCNAME} "$@"
 
+   # Ruby 3.1 has a varargs implementation that is not compatible with
+   # gnu23. Ruby 3.1 is EOL in March 2025 and will be removed shortly
+   # after that.
+   case ${RUBY} in
+   *ruby31)
+   append-flags -std=gnu17
+   filter-flags -std=gnu23
+   ;;
+   esac
+
tc-export PKG_CONFIG
for extension in "${RUBY_FAKEGEM_EXTENSIONS[@]}" ; do
		CC=$(tc-getCC) ${RUBY} --disable=did_you_mean -C ${extension%/*} ${extension##*/} --with-cflags="${CFLAGS}" --with-ldflags="${LDFLAGS}" ${RUBY_FAKEGEM_EXTENSION_OPTIONS} || die
-- 
2.45.2
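For readers unfamiliar with flag-o-matic, the effect of the two calls in the patch can be illustrated with simplified stand-ins (the real append-flags/filter-flags also handle CXXFLAGS, FFLAGS, and wildcard patterns; this sketch only models a plain CFLAGS string):

```shell
# Simplified emulations of flag-o-matic's append-flags/filter-flags,
# operating only on CFLAGS. Not the real eclass functions.
CFLAGS="-O2 -pipe -std=gnu23"

append_flags() {
	CFLAGS="${CFLAGS} $*"
}

filter_flags() {
	# Drop every occurrence of $1 from CFLAGS.
	new=
	for f in ${CFLAGS}; do
		[ "${f}" = "$1" ] || new="${new} ${f}"
	done
	CFLAGS="${new# }"
}

# Mirror the patch: force gnu17, drop any user-supplied gnu23.
append_flags -std=gnu17
filter_flags -std=gnu23

echo "${CFLAGS}"   # -O2 -pipe -std=gnu17
```

Filtering -std=gnu23 explicitly (rather than relying on the appended -std=gnu17 merely winning as the last flag) keeps the compiler invocation unambiguous.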




Re: [gentoo-dev] Re: LLVM build strategy

2024-12-08 Thread Eli Schwartz
On 12/8/24 4:45 PM, Sam James wrote:
>> I don't like the idea of spending hours building everything before I'm
>> even able to start running tests, just to learn that LLVM is broken
>> and there's no point in even starting to build the rest.
> 
> I don't follow this bit -- you need the new LLVM merged before you can
> build Clang's tests, right? And if any of it fails to build, it's not
> like we can commit the release or snapshot?
> 
> What am I missing on this bit?


I think the point here is that currently, one can build sys-devel/llvm
with tests enabled, and if it fails, there's no point in also building
sys-devel/clang. But with a monorepo build, you'd have to build llvm,
clang (and various others) first, and then launch tests for llvm and
clang together, only to get a test failure in the llvm tests that
indicates everything else is broken too. Depending on cmake test
ordering, you may even run half the clang tests before hitting the llvm
failures.

In theory this could be solved by building monorepo-llvm with
FEATURES=test USE="-clang" to start running tests, and then if that
passes, rebuild llvm again but this time with clang etc. enabled. Not
sure this is actually solving the stated objection...
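Spelled out, that two-pass idea would look something like this (a hypothetical workflow: it assumes a monorepo sys-devel/llvm package carrying a clang USE flag, which is not an existing Gentoo setup):

```shell
# Pass 1: build and test the LLVM core alone (hypothetical monorepo
# package and USE flag; names are illustrative).
FEATURES="test" USE="-clang" emerge --oneshot sys-devel/llvm &&
# Pass 2: only if that passed, rebuild with clang (etc.) enabled.
FEATURES="test" USE="clang" emerge --oneshot sys-devel/llvm
```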


-- 
Eli Schwartz




[gentoo-dev] Re: LLVM build strategy

2024-12-08 Thread Sam James
Michał Górny  writes:

> On Sun, 2024-12-08 at 04:53 +, Sam James wrote:
>> I fear this sort of assumes we won't switch to monobuild any time soon.
>
> I don't see one precluding the other.  Categories are cheap.  Package
> moves not necessarily, but switching to monorepo will be a complete pain
> whether one more package move is involved or not.
>
>> I keep thinking [0] about how sustainable our current setup is:
>> * Fedora moved away from it for >=18 [1].
>> * As we saw with offload, it broke a few times in just a week. So it's
>>   only Gentoo who cares about this configuration AFAIK.
>
> It broke once, and only because the merged pull request preceded my
> changes, and the author dealt with merge conflicts wrong.
>
> That said, it's not like I didn't fix the monorepo build as well this
> week, because it was broken from day one.
>
> We're on our own either way.
>
>
>> * Build time
>> 
>>   Build time for mono LLVM would increase as we're building more
>>   components at least for some users.
>> 
>>   But the time added by
>>   building more components (see below) should be balanced out by ccache if
>>   doing development and also, importantly, not needing to force on all
>>   targets anymore (they keep growing).
>
> I don't see how we would avoid forcing targets if *external* projects
> (wasn't the bug about Rust originally?) can still be broken if you
> change targets.

We'd have LLVM be internally consistent so we wouldn't have to worry
about issues with LLD (where it came up a lot, IIRC). But yeah, Rust
would still be a problem.

>
>>   The cumulative time should be the same (although maybe a bit less
>>   given the targets change) given that most people need the
>>   same set of components because of mesa, firefox, or other things which
>>   need libclang.
>
> So you spend hours building LLVM and Clang.  Then you spend hours
> building everything again because one more package needs LLD.  Then
> more hours because you've decided to try LLDB.
>

LLD is kind of persuasive given that some stuff like Firefox ends up needing it.

> I've been rebuilding three LLVM versions recently because of cpp-httplib
> changing subslot multiple times.  With the proposed change, I'd
> be rebuilding everything instead.
>
> In fact, I've already started considering splitting llvm-debuginfod.
>
>> At the moment, I fear us choosing the non-recommended path gives us very
>> little, and causes a bunch of problems in return.
>
> If you consider being able to have a really working LLVM package "very
> little", so be it.

(I think "really working" is ambiguous but I've still been persuaded by
your other points.)

>
> If you choose to go for monorepo, I'm stepping down, because
> I definitely won't be able to deal with this shit without being able to
> split it into smaller parts.
>
> I don't like the idea that any minor patch (think of compiler-rt that
> regularly needs to be updated for newer glibc) will require spending
> hours rebuilding everything.  In multiple LLVM versions.  For every
> single person, including all the people who don't build compiler-rt
> at all.

... and I find the compiler-rt argument very compelling, given how
brittle the sanitizers are. 

>
> I don't like the idea of not being able to run parts of test suite
> without resorting to ugly hacks.

I think this bit is fair.

> I don't like the idea of spending hours building everything before I'm
> even able to start running tests, just to learn that LLVM is broken
> and there's no point in even starting to build the rest.

I don't follow this bit -- you need the new LLVM merged before you can
build Clang's tests, right? And if any of it fails to build, it's not
like we can commit the release or snapshot?

What am I missing on this bit?

> Or having the test suite fail after a few hours
> because of some minor configuration issue (like the one we'd had with
> libcxx, and I've hit that one three times), and having to start
> everything over again.
>

I don't remember which configuration issue that was, although I can
easily imagine (and we've had it before) the opposite happening (a
configuration issue that exists only because of our build). But yes, it's
a fair point that we don't have a good way to cleanly/easily just run
tests again without hacks (like avoiding clean), and even that doesn't
always work properly.

>
> And ccache is not a solution.  It's a cheap hack, and a costly one. 
> Maintaining a cache for this thing requires tons of wasted disk space. 
> And unless you go out of your way to reconfigure it, building 2-3 LLVM
> versions will normally push all previous objects out of the cache, so it
> won't work for most people at all.  Provided they go out of their
> way to configure it in the first place.
>
> In the end, LLVM is a project primarily maintained by people working for
> shitty corporations that only care about being able to build their
> shitty browser written in bad C++.  It sucks we ended up having to
maintain it, but that's not our choice.

Re: [gentoo-dev] Re: LLVM build strategy

2024-12-08 Thread Michał Górny
On Sun, 2024-12-08 at 17:23 -0500, Eli Schwartz wrote:
> On 12/8/24 4:45 PM, Sam James wrote:
> > > I don't like the idea of spending hours building everything before I'm
> > > even able to start running tests, just to learn that LLVM is broken
> > > and there's no point in even starting to build the rest.
> > 
> > I don't follow this bit -- you need the new LLVM merged before you can
> > build Clang's tests, right? And if any of it fails to build, it's not
> > like we can commit the release or snapshot?
> > 
> > What am I missing on this bit?
> 
> 
> I think the point here is that currently, one can build sys-devel/llvm
> with tests enabled, and if it fails, there's no point in also building
> sys-devel/clang. But with a monorepo build, you'd have to build llvm,
> clang (and various others) first, and then launch tests for llvm and
> clang together, only to get a test failure in the llvm tests that
> indicates everything else is broken too. Depending on cmake test
> ordering, you may even run half the clang tests before hitting the llvm
> failures.

Exactly.  You have to compile everything first, before you start testing
anything.

And I don't think testing order is predictable either. I mean,
technically we could just issue check-llvm, check-clang, and so on
manually, but that's going to be messy and I'm not even sure if I can
figure out all the correct targets to avoid duplication.

And if you think we could hack this around by stubbing src_compile()
and having tests compile things component after component...
the dependencies are pretty much broken everywhere, and you do need to
compile everything before you run check-* targets.
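For reference, issuing per-component suites by hand would look roughly like this (assuming the whole monorepo has already been compiled into build/; check-llvm, check-clang, and check-all are upstream's standard targets):

```shell
# All of these require the full monorepo build to exist first, which is
# exactly the objection: the check-* targets don't let you build less.
ninja -C build check-llvm    # LLVM core tests only
ninja -C build check-clang   # Clang tests only
ninja -C build check-all     # everything, in whatever order cmake picked
```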

> In theory this could be solved by building monorepo-llvm with
> FEATURES=test USE="-clang" to start running tests, and then if that
> passes, rebuild llvm again but this time with clang etc. enabled. Not
> sure this is actually solving the stated objection...

ccache will save most of the time on rebuilding the same things (except
for tablegen or sphinx, which are pretty slow too), but you'd still be
rerunning the same tests.

-- 
Best regards,
Michał Górny





Re: [gentoo-dev] [PATCH] ruby-fakegem.eclass: compile ruby31 extensions with gnu17

2024-12-08 Thread Sam James
Hans de Graaff  writes:

> The varargs implementation in Ruby 3.1 is not compatible with gnu23. Ruby
> 3.1 is in security maintenance mode upstream so it is unlikely that the
> fixes from Ruby 3.2 will be backported. Ruby 3.1 is EOL in March 2025
> and will be removed from Gentoo around that time.
>

LGTM if you've confirmed it fixes an extension build. Add a Closes
tag for https://bugs.gentoo.org/943988?

> Signed-off-by: Hans de Graaff 
> ---
>  eclass/ruby-fakegem.eclass | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/eclass/ruby-fakegem.eclass b/eclass/ruby-fakegem.eclass
> index eb6257a50cf9..fc78428be714 100644
> --- a/eclass/ruby-fakegem.eclass
> +++ b/eclass/ruby-fakegem.eclass
> @@ -23,6 +23,8 @@ case ${EAPI} in
>   *) die "${ECLASS}: EAPI ${EAPI:-0} not supported" ;;
>  esac
>  
> +# flag-o-matic is only required for ruby31 support.
> +inherit flag-o-matic
>  inherit ruby-ng
>  
>  # @ECLASS_VARIABLE: RUBY_FAKEGEM_NAME
> @@ -424,6 +426,16 @@ EOF
>  each_fakegem_configure() {
>   debug-print-function ${FUNCNAME} "$@"
>  
> + # Ruby 3.1 has a varargs implementation that is not compatible with
> + # gnu23. Ruby 3.1 is EOL in March 2025 and will be removed shortly
> + # after that.
> + case ${RUBY} in
> + *ruby31)
> + append-flags -std=gnu17
> + filter-flags -std=gnu23
> + ;;
> + esac
> +
>   tc-export PKG_CONFIG
>   for extension in "${RUBY_FAKEGEM_EXTENSIONS[@]}" ; do
> 		CC=$(tc-getCC) ${RUBY} --disable=did_you_mean -C ${extension%/*} ${extension##*/} --with-cflags="${CFLAGS}" --with-ldflags="${LDFLAGS}" ${RUBY_FAKEGEM_EXTENSION_OPTIONS} || die



Re: LLVM build strategy (was: Re: [gentoo-dev] [RFC] New categories for LLVM)

2024-12-08 Thread Michał Górny
On Sun, 2024-12-08 at 04:53 +, Sam James wrote:
> I fear this sort of assumes we won't switch to monobuild any time soon.

I don't see one precluding the other.  Categories are cheap.  Package
moves not necessarily, but switching to monorepo will be a complete pain
whether one more package move is involved or not.

> I keep thinking [0] about how sustainable our current setup is:
> * Fedora moved away from it for >=18 [1].
> * As we saw with offload, it broke a few times in just a week. So it's
>   only Gentoo who cares about this configuration AFAIK.

It broke once, and only because the merged pull request preceded my
changes, and the author dealt with merge conflicts wrong.

That said, it's not like I didn't fix the monorepo build as well this
week, because it was broken from day one.

We're on our own either way.


> * Build time
> 
>   Build time for mono LLVM would increase as we're building more
>   components at least for some users.
> 
>   But the time added by
>   building more components (see below) should be balanced out by ccache if
>   doing development and also, importantly, not needing to force on all
>   targets anymore (they keep growing).

I don't see how we would avoid forcing targets if *external* projects
(wasn't the bug about Rust originally?) can still be broken if you
change targets.

>   The cumulative time should be the same (although maybe a bit less
>   given the targets change) given that most people need the
>   same set of components because of mesa, firefox, or other things which
>   need libclang.

So you spend hours building LLVM and Clang.  Then you spend hours
building everything again because one more package needs LLD.  Then
more hours because you've decided to try LLDB.

I've been rebuilding three LLVM versions recently because of cpp-httplib
changing subslot multiple times.  With the proposed change, I'd
be rebuilding everything instead.

In fact, I've already started considering splitting llvm-debuginfod.

> At the moment, I fear us choosing the non-recommended path gives us very
> little, and causes a bunch of problems in return.

If you consider being able to have a really working LLVM package "very
little", so be it.

If you choose to go for monorepo, I'm stepping down, because
I definitely won't be able to deal with this shit without being able to
split it into smaller parts.

I don't like the idea that any minor patch (think of compiler-rt that
regularly needs to be updated for newer glibc) will require spending
hours rebuilding everything.  In multiple LLVM versions.  For every
single person, including all the people who don't build compiler-rt
at all.

I don't like the idea of not being able to run parts of test suite
without resorting to ugly hacks.  I don't like the idea of spending
hours building everything before I'm even able to start running tests,
just to learn that LLVM is broken and there's no point in even starting
to build the rest.  Or having the test suite fail after a few hours
because of some minor configuration issue (like the one we'd had with
libcxx, and I've hit that one three times), and having to start
everything over again.


And ccache is not a solution.  It's a cheap hack, and a costly one. 
Maintaining a cache for this thing requires tons of wasted disk space. 
And unless you go out of your way to reconfigure it, building 2-3 LLVM
versions will normally push all previous objects out of the cache, so it
won't work for most people at all.  Provided they go out of their
way to configure it in the first place.

In the end, LLVM is a project primarily maintained by people working for
shitty corporations that only care about being able to build their
shitty browser written in bad C++.  It sucks we ended up having to
maintain it, but that's not our choice.

-- 
Best regards,
Michał Górny





Re: [gentoo-dev] [RFC] New categories for LLVM

2024-12-08 Thread Michał Górny
On Sun, 2024-12-08 at 04:11 +, Sam James wrote:
> I'm not sure if I'm sold on *two*. What happens for stuff like mlir
> where it's not a runtime but it's arguably more of one than core?
> 
> It just doesn't feel like the division works great. Or maybe it's just
> because I feel like llvm-core will keep growing and llvm-runtimes won't.

Well, upstream has a split between "projects" and "runtimes",
and I think it makes sense to follow that.  "compiler-rt" and "libc"
happen to be listed in both, but I suppose it's more of a historical
thing -- I think you're always supposed to be doing the runtimes build
for projects that support it these days.
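Upstream's split shows up directly in the monorepo configure step; a typical invocation (with an illustrative selection of components) keeps toolchain pieces in LLVM_ENABLE_PROJECTS and target libraries in LLVM_ENABLE_RUNTIMES:

```shell
# "Projects" are built with the host toolchain; "runtimes" are built with
# the just-built compiler. compiler-rt historically appears in both lists,
# but the runtimes build is what upstream recommends these days.
cmake -S llvm -B build -G Ninja \
    -DLLVM_ENABLE_PROJECTS="clang;lld;mlir" \
    -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind"
```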

Right now, we don't package libc, pstl, llvm-libgcc -- so there are
definitely more potential packages to be added.

I suspect that this split will also help with crossdev.  FWIU we only
need to cover runtimes there, since the toolchain itself is cross-
capable by design.

-- 
Best regards,
Michał Górny





[gentoo-dev] [PATCH] Introduce llvm-core and llvm-runtimes categories

2024-12-08 Thread Michał Górny
Signed-off-by: Michał Górny 
---
 llvm-core/metadata.xml     | 14 ++++++++++++++
 llvm-runtimes/metadata.xml | 14 ++++++++++++++
 2 files changed, 28 insertions(+)
 create mode 100644 llvm-core/metadata.xml
 create mode 100644 llvm-runtimes/metadata.xml

diff --git a/llvm-core/metadata.xml b/llvm-core/metadata.xml
new file mode 100644
index ..014d005edb88
--- /dev/null
+++ b/llvm-core/metadata.xml
@@ -0,0 +1,14 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE catmetadata SYSTEM "https://www.gentoo.org/dtd/metadata.dtd">
+<catmetadata>
+	<longdescription lang="en">
+		The llvm-core category contains packages comprising the LLVM
+		toolchain (normally selected among LLVM components
+		via LLVM_ENABLE_PROJECTS).
+	</longdescription>
+	<longdescription lang="pl">
+		Kategoria llvm-core zawiera paczki narzędzi programistycznych
+		LLVM (spośród komponentów LLVM wybierane za pośrednictwem
+		LLVM_ENABLE_PROJECTS).
+	</longdescription>
+</catmetadata>
diff --git a/llvm-runtimes/metadata.xml b/llvm-runtimes/metadata.xml
new file mode 100644
index ..6b0e1e57d36d
--- /dev/null
+++ b/llvm-runtimes/metadata.xml
@@ -0,0 +1,14 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE catmetadata SYSTEM "https://www.gentoo.org/dtd/metadata.dtd">
+<catmetadata>
+	<longdescription lang="en">
+		The llvm-runtimes category contains packages providing runtimes
+		specific to the LLVM toolchain (normally selected among LLVM
+		components via LLVM_ENABLE_RUNTIMES).
+	</longdescription>
+	<longdescription lang="pl">
+		Kategoria llvm-runtimes zawiera paczki z bibliotekami, używanymi
+		przez programy skompilowane za pomocą narzędzi LLVM (spośród
+		komponentów LLVM wybierane za pośrednictwem LLVM_ENABLE_RUNTIMES).
+	</longdescription>
+</catmetadata>
-- 
2.47.1