Re: GSoC 2025: In-Memory Filesystem for GPU Offloading Tests

2025-03-10 Thread Arijit Kumar Das via Gcc
Hello Andrew,

Thank you for the detailed response! This gives me a much clearer picture
of how things work.

Regarding the two possible approaches:


   - I personally find *Option A (self-contained in-memory FS)* more
   interesting, and I'd like to work on it first.

   - However, if *Option B (RPC-based host FS access)* is the preferred
   approach for GSoC, I’d be happy to work on that as well.

For now, I’ll begin setting up the toolchain and running simple OpenMP
target kernels as suggested. Thanks again for your guidance!

Best regards,
Arijit Kumar Das.

On Mon, 10 Mar, 2025, 10:55 pm Andrew Stubbs wrote:

> On 10/03/2025 15:37, Arijit Kumar Das via Gcc wrote:
> > Hello GCC Community!
> >
> > I am Arijit Kumar Das, a second-year engineering undergraduate from NIAMT
> > Ranchi, India. While my major isn’t Computer Science, my passion for
> > system programming, embedded systems, and operating systems has driven me
> > toward low-level development. Programming has always fascinated me—it’s
> > like painting with logic, where each block of code works in perfect
> > synchronization.
> >
> > The project mentioned in the subject immediately caught my attention, as
> > I have been exploring the idea of a simple hobby OS for my Raspberry Pi
> > Zero. Implementing an in-memory filesystem would be an exciting learning
> > opportunity, closely aligning with my interests.
> >
> > I have carefully read the project description and understand that the
> > goal is to modify *newlib* and the *run tools* to redirect system calls
> > for file I/O operations to a virtual, volatile filesystem in host memory,
> > as the GPU lacks its own filesystem. Please correct me if I’ve
> > misunderstood any aspect.
>
> That was the first of two options suggested.  The other option is to
> implement a pass-through RPC mechanism so that the runtime actually can
> access the real host file-system.
>
> Option A is more self-contained, but requires inventing a filesystem and
> ultimately will not help all the tests pass.
>
> Option B has more communication code, but doesn't require storing
> anything manually, and eventually should give full test coverage.
>
> A simple RPC mechanism already exists for the use of printf (actually
> "write") on GCN, but was not necessary on NVPTX (a "printf" text output
> API is provided by the driver).  The idea is to use a shared memory ring
> buffer that the host "run" tool polls while the GPU kernel is running.
>
> > I have set up the GCC source tree and am currently browsing relevant
> > files in the *gcc/testsuite* directory. However, I am unsure *where the
> > run tools source files are located and how they interact with newlib
> > system calls.* Any guidance on this would be greatly appreciated so I can
> > get started as soon as possible!
>
> You'll want to install the toolchain following the instructions at
> https://gcc.gnu.org/wiki/Offloading and try running some simple OpenMP
> target kernels first.  Newlib isn't part of the GCC repo, so if you
> can't find the files then that's probably why!
>
> The "run" tools are installed as part of the offload toolchain, albeit
> hidden under the "libexec" directory because they're really only used
> for testing. You can find the sources with the config/nvptx or
> config/gcn backend files.
>
> User code is usually written using OpenMP or OpenACC, in which case the
> libgomp target plugins serve the same function as the "run" tools. These
> too could use the file-system access, but it's not clear that there's a
> common use-case for that.  The case should at least fail gracefully
> though (as they do now).
>
> Currently, system calls such as "open" simply return EACCES
> ("permission denied") so the stub implementations are fairly easy to
> understand (e.g. newlib/libc/sys/amdgcn/open.c).  The task would be to
> insert new code there that actually does something.  You do not need to
> modify the compiler itself.
>
> Hope that helps
>
> Andrew
>
> >
> > Best regards,
> > Arijit Kumar Das.
> >
> > *GitHub:* https://github.com/ArijitKD
> > *LinkedIn:* https://linkedin.com/in/arijitkd
> >
>
>


GSoC 2025: In-Memory Filesystem for GPU Offloading Tests

2025-03-15 Thread Arijit Kumar Das via Gcc
Hello GCC Community!

I am Arijit Kumar Das, a second-year engineering undergraduate from NIAMT
Ranchi, India. While my major isn’t Computer Science, my passion for system
programming, embedded systems, and operating systems has driven me toward
low-level development. Programming has always fascinated me—it’s like
painting with logic, where each block of code works in perfect
synchronization.

The project mentioned in the subject immediately caught my attention, as I
have been exploring the idea of a simple hobby OS for my Raspberry Pi Zero.
Implementing an in-memory filesystem would be an exciting learning
opportunity, closely aligning with my interests.

I have carefully read the project description and understand that the goal
is to modify *newlib* and the *run tools* to redirect system calls for file
I/O operations to a virtual, volatile filesystem in host memory, as the GPU
lacks its own filesystem. Please correct me if I’ve misunderstood any
aspect.

I have set up the GCC source tree and am currently browsing relevant files
in the *gcc/testsuite* directory. However, I am unsure *where the run tools
source files are located and how they interact with newlib system calls.*
Any guidance on this would be greatly appreciated so I can get started as
soon as possible!

Best regards,
Arijit Kumar Das.

*GitHub:* https://github.com/ArijitKD
*LinkedIn:* https://linkedin.com/in/arijitkd


Re: GSoC 2025: In-Memory Filesystem for GPU Offloading Tests

2025-04-05 Thread Arijit Kumar Das via Gcc
Hi,

> Let us know if you need further help; we understand it's not trivial to
> get this set up at first.

Sure! To be honest, I haven't had time to set up the toolchain completely
yet (due to classes and ongoing mid-semester examinations). I plan to
finish it as soon as I get some time. I have set up the development
environment, though: I have installed Debian 12 (multi-boot) on my laptop
along with all the necessary packages, and the newlib and GCC sources are
at hand.

> Do you have access to a system with an AMD GPU supported by GCC, or any
> Nvidia GPU?

Yes, my laptop has an Nvidia RTX 4050 GPU, which I believe should work for
nvptx. The only thing that concerns me here, though, is that I was unable
to get the Nvidia drivers to work: I installed and reinstalled them a
couple of times, but they still won't load. I currently do not have them
installed, but I suppose I'll do something about it soon. Or will the
preinstalled nouveau driver work just fine?

> (I assume, by now you've found the newlib source code?)

Yes! I have browsed through the newlib sources. The sources that we may be
concerned with are apparently at newlib/libc/machine/nvptx. Since I do not
own an AMD GPU, I guess I should probably focus on nvptx. Right?


> Actually, only for GCN: 'gcc/config/gcn/gcn-run.cc'. For nvptx, it's
> part of <https://github.com/SourceryTools/nvptx-tools>: 'nvptx-run.cc'.

Fine, I'll check this out!


I also wanted to ask: I'm currently preparing a draft proposal; would it
be alright if I sent it here in this thread for your review?


Best regards,

Arijit

On Wed, 2 Apr, 2025, 8:01 pm Thomas Schwinge wrote:

> Hi Arijit, Andrew!
>
> Arijit, welcome to GCC!
>
> On 2025-03-11T03:26:44+0530, Arijit Kumar Das via Gcc wrote:
> > Thank you for the detailed response! This gives me a much clearer picture
> > of how things work.
> >
> > Regarding the two possible approaches:
> >
> >- I personally find *Option A (self-contained in-memory FS)* more
> >interesting, and I'd like to work on it first.
>
> Sure, that's fine.  ..., and we could then still put Option B on top, in
> case that just Option A should turn out to be too easy for you.  ;-)
>
> >- However, if *Option B (RPC-based host FS access)* is the preferred
> >approach for GSoC, I’d be happy to work on that as well.
>
> > For now, I’ll begin setting up the toolchain and running simple OpenMP
> > target kernels as suggested. Thanks again for your guidance!
>
> Let us know if you need further help; we understand it's not trivial to
> get this set up at first.
>
> Do you have access to a system with an AMD GPU supported by GCC, or any
> Nvidia GPU?
>
> Just a few more comments in addition to Andrew's very useful remarks.
> (Thank you, Andrew!)
>
> > On Mon, 10 Mar, 2025, 10:55 pm Andrew Stubbs wrote:
> >> On 10/03/2025 15:37, Arijit Kumar Das via Gcc wrote:
> >> > I have carefully read the project description and understand that the
> >> > goal is to modify *newlib* and the *run tools* to redirect system
> >> > calls for file I/O operations to a virtual, volatile filesystem in
> >> > host memory, as the GPU
>
> Instead of "in host memory", it should be "in GPU memory" (for Option A).
>
> >> > lacks its own filesystem. Please correct me if I’ve misunderstood any
> >> > aspect.
>
> >> > I have set up the GCC source tree and am currently browsing relevant
> >> > files in the *gcc/testsuite* directory. However, I am unsure *where
> >> > the run tools source files are located and how they interact with
> >> > newlib system calls.*
>
> >> Newlib isn't part of the GCC repo, so if you
> >> can't find the files then that's probably why!
>
> (I assume, by now you've found the newlib source code?)
>
> >> The "run" tools are installed as part of the offload toolchain, albeit
> >> hidden under the "libexec" directory because they're really only used
> >> for testing. You can find the sources with the config/nvptx or
> >> config/gcn backend files.
>
> Actually, only for GCN: 'gcc/config/gcn/gcn-run.cc'.  For nvptx, it's
> part of <https://github.com/SourceryTools/nvptx-tools>: 'nvptx-run.cc'.
>
> >> Currently, system calls such as "open" simply return EACCES
> >> ("permission denied") so the stub implementations are fairly easy to
> >> understand (e.g. newlib/libc/sys/amdgcn/open.c).
>
> (I assume, by now you've found the corresponding nvptx code in newlib?)
>
> >> The task would be to
> >> insert new code there that actually does something.  You do not need to
> >> modify the compiler itself.
>
>
> Regards
>  Thomas
>


Re: GSoC 2025: In-Memory Filesystem for GPU Offloading Tests

2025-04-06 Thread Arijit Kumar Das via Gcc
Hi Thomas,

Some updates here.

After some research, I realized that the nouveau driver may not be
sufficient for our workload, so the Nvidia drivers are a must, along with
the CUDA toolkit. Fortunately, I got the drivers to work under Debian once
I disabled Secure Boot in the firmware, so we're good to go on this front!

I now have a solid high-level understanding of the tasks to be undertaken,
which I have briefly described in my proposal. I aim to work out the nuts
and bolts as I progress through the project.

> I also wanted to ask: I'm currently preparing a draft proposal; would it
> be alright if I sent it here in this thread for your review?

Since you might be busy and may not have time to review my draft proposal
here, I've uploaded it directly to the GSoC portal.

Looking forward to your thoughts!


Best regards,

Arijit

On Thu, 3 Apr, 2025, 11:42 am Arijit Kumar Das,
<arijitkdgit.offic...@gmail.com> wrote:

> Hi,
>
> > Let us know if you need further help; we understand it's not trivial to
> > get this set up at first.
>
> Sure! To be honest, I haven't had time to set up the toolchain completely
> yet (due to classes and ongoing mid-semester examinations). I plan to
> finish it as soon as I get some time. I have set up the development
> environment, though: I have installed Debian 12 (multi-boot) on my laptop
> along with all the necessary packages, and the newlib and GCC sources are
> at hand.
>
> > Do you have access to a system with an AMD GPU supported by GCC, or any
> > Nvidia GPU?
>
> Yes, my laptop has an Nvidia RTX 4050 GPU, which I believe should work for
> nvptx. The only thing that concerns me here, though, is that I was unable
> to get the Nvidia drivers to work: I installed and reinstalled them a
> couple of times, but they still won't load. I currently do not have them
> installed, but I suppose I'll do something about it soon. Or will the
> preinstalled nouveau driver work just fine?
>
> > (I assume, by now you've found the newlib source code?)
>
> Yes! I have browsed through the newlib sources. The sources that we may be
> concerned with are apparently at newlib/libc/machine/nvptx. Since I do not
> own an AMD GPU, I guess I should probably focus on nvptx. Right?
>
>
> > Actually, only for GCN: 'gcc/config/gcn/gcn-run.cc'. For nvptx, it's
> > part of <https://github.com/SourceryTools/nvptx-tools>: 'nvptx-run.cc'.
>
> Fine, I'll check this out!
>
>
> I also wanted to ask: I'm currently preparing a draft proposal; would it
> be alright if I sent it here in this thread for your review?
>
>
> Best regards,
>
> Arijit
>
> On Wed, 2 Apr, 2025, 8:01 pm Thomas Schwinge wrote:
>
>> Hi Arijit, Andrew!
>>
>> Arijit, welcome to GCC!
>>
>> On 2025-03-11T03:26:44+0530, Arijit Kumar Das via Gcc wrote:
>> > Thank you for the detailed response! This gives me a much clearer
>> > picture of how things work.
>> >
>> > Regarding the two possible approaches:
>> >
>> >- I personally find *Option A (self-contained in-memory FS)* more
>> >interesting, and I'd like to work on it first.
>>
>> Sure, that's fine.  ..., and we could then still put Option B on top, in
>> case that just Option A should turn out to be too easy for you.  ;-)
>>
>> >- However, if *Option B (RPC-based host FS access)* is the preferred
>> >approach for GSoC, I’d be happy to work on that as well.
>>
>> > For now, I’ll begin setting up the toolchain and running simple OpenMP
>> > target kernels as suggested. Thanks again for your guidance!
>>
>> Let us know if you need further help; we understand it's not trivial to
>> get this set up at first.
>>
>> Do you have access to a system with an AMD GPU supported by GCC, or any
>> Nvidia GPU?
>>
>> Just a few more comments in addition to Andrew's very useful remarks.
>> (Thank you, Andrew!)
>>
>> > On Mon, 10 Mar, 2025, 10:55 pm Andrew Stubbs wrote:
>> >> On 10/03/2025 15:37, Arijit Kumar Das via Gcc wrote:
>> >> > I have carefully read the project description and understand that
>> >> > the goal is to modify *newlib* and the *run tools* to redirect
>> >> > system calls for file I/O operations to a virtual, volatile
>> >> > filesystem in host memory, as the GPU
>>
>> Instead of "in host memory", it should be "in GPU memory" (for Option A).
>>
>> >> > lacks its own filesystem. Please correct me if I’ve misunderstood any
>> >> > aspect.