Re: [PATCH 0/6] v2 of libdiagnostics
Hi David - and thanks for posting an outline for libdiagnostics at https://gcc.gnu.org/wiki/libdiagnostics

Currently this shows both libdiagnostics and libdiagnostics-sarif-dump integrated into GCC. Is this the plan, or would those be available as a top-level project (the program as an example for the library), possibly with the library sources also pushed to GCC?

Oh, and one question as I stumbled over that today: would libdiagnostics now (or in the future) use libtextstyle for formatting (and another possible sink: HTML)?

Simon

Am 23.11.2023 um 18:36 schrieb Pedro Alves:

Hi David,

On 2023-11-21 22:20, David Malcolm wrote:

Here's v2 of the "libdiagnostics" shared library idea; see: https://gcc.gnu.org/wiki/libdiagnostics

As in v1, patch 1 (for GCC) shows libdiagnostic.h (the public header file), along with examples of simple self-contained programs that show various uses of the API. As in v1, patch 2 (for GCC) is the work-in-progress implementation. Patch 3 (for GCC) adds a new libdiagnostics++.h, a wrapper API providing some syntactic sugar when using the API from C++. I've been using this to "eat my own dogfood" and write a simple SARIF-dumping tool: https://github.com/davidmalcolm/libdiagnostics-sarif-dump

Patch 4 (for GCC) is an internal change needed by patch 1. Patch 5 (for GCC) updates GCC's source printing code so that when there's no column information, we don't print annotation lines. This fixes the extra lines seen using it from gas discussed in: https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635575.html

Patch 6 (for binutils) is an updated version of the experiment at using the API from gas.

Thoughts?

Do you have plans on making this a top level library instead? That would allow easily making it a non-optional dependency for binutils, as we could have the library in the binutils-gdb repo as well, for instance. From the Cauldron discussion I understood that the diagnostics stuff doesn't depend on much of GCC's data structures, and doesn't rely on the garbage collector. Is there something preventing that? (Other than "it's-a-matter-of-time/effort", of course.)

Pedro Alves
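For readers who haven't opened patch 1, here is a minimal sketch of what a client of the proposed API looks like, based on my reading of the example programs posted there. The function names, enum values and signatures are assumptions taken from that reading and may not match the final header exactly, so treat this as pseudocode against the proposed API rather than a definitive example.

  /* Minimal libdiagnostics client, sketched from the examples in patch 1.
     Names and signatures follow my reading of the posted libdiagnostics.h
     and may differ from the final API.  */
  #include "libdiagnostics.h"
  #include <stdio.h>

  int
  main (void)
  {
    diagnostic_manager *mgr = diagnostic_manager_new ();

    /* Sinks decide how diagnostics are emitted; classic text output here.
       A SARIF sink would be added similarly via
       diagnostic_manager_add_sarif_sink (see patch 1 for its signature).  */
    diagnostic_manager_add_text_sink (mgr, stderr, DIAGNOSTIC_COLORIZE_IF_TTY);

    const diagnostic_file *file
      = diagnostic_manager_new_file (mgr, "foo.c", "c" /* source language */);
    const diagnostic_physical_location *loc
      = diagnostic_manager_new_location_from_file_and_line (mgr, file, 3);

    diagnostic *d = diagnostic_begin (mgr, DIAGNOSTIC_LEVEL_WARNING);
    diagnostic_set_location (d, loc);
    diagnostic_finish (d, "a warning message");

    diagnostic_manager_release (mgr);
    return 0;
  }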
Re: [PATCH] binutils: experimental use of libdiagnostics in gas
Thank you very much for this proof-of-concept use! Inspecting it raises the following questions for me, both for a possible binutils implementation and for the library use in general:

* How should the application set the relevant context (often lines are shown before/after)?
* Should it be possible to override the msgid used to display the warning/error type? If this were possible, the text sink in messages_init could be adjusted to replace the label with _("Warning") and _("Error"), which would leave the text output "as-is" (if the text sink is configured to not output the source line); this would make it usable without adjusting the testsuite, and allow adapting to a standard later.

Notes for the SARIF output:

* the region contains an error: according to the linked JSON spec, startColumn has a minimum of 1 (I guess you'd just leave it out if the application did not set it)
* the application should have the option to pre-set the sourceLanguage for the diagnostic_manager (maybe even make that a positional argument that needs to be passed but can be NULL) and override it when specifying a region

Thanks,
Simon

Am 06.11.2023 um 23:29 schrieb David Malcolm:

Here's a patch for gas in binutils that makes it use libdiagnostics (with some nasty hardcoded paths to specific places on my hard drive to make it easier to develop the API). For now this hardcodes adding two sinks: a text sink on stderr, and also a SARIF output to stderr (which happens after all regular output).

For example, without this patch:

gas testsuite/gas/all/warn-1.s

emits:

testsuite/gas/all/warn-1.s: Assembler messages:
testsuite/gas/all/warn-1.s:3: Warning: a warning message
testsuite/gas/all/warn-1.s:4: Error: .warning argument must be a string
testsuite/gas/all/warn-1.s:5: Warning: .warning directive invoked in source file
testsuite/gas/all/warn-1.s:6: Warning: .warning directive invoked in source file
testsuite/gas/all/warn-1.s:7: Warning:

whereas with this patch:

LD_LIBRARY_PATH=/home/david/coding-3/gcc-newgit-canvas-2023/build/gcc ./as-new testsuite/gas/all/warn-1.s

emits:

testsuite/gas/all/warn-1.s:3: warning: a warning message
    3 | .warning "a warning message" ;# { dg-warning "Warning: a warning message" }
      |
testsuite/gas/all/warn-1.s:4: error: .warning argument must be a string
    4 | .warning a warning message ;# { dg-error "Error: .warning argument must be a string" }
      |
testsuite/gas/all/warn-1.s:5: warning: .warning directive invoked in source file
    5 | .warning ;# { dg-warning "Warning: .warning directive invoked in source file" }
      |
testsuite/gas/all/warn-1.s:6: warning: .warning directive invoked in source file
    6 | .warning ".warning directive invoked in source file" ;# { dg-warning "Warning: .warning directive invoked in source file" }
      |
testsuite/gas/all/warn-1.s:7: warning:
    7 | .warning "";# { dg-warning "Warning: " }
      |

{"$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json", "version": "2.1.0", "runs": [{"tool": {"driver": {"rules": []}}, "invocations": [{"executionSuccessful": true, "toolExecutionNotifications": []}], "originalUriBaseIds": {"PWD": {"uri": "file:///home/david/coding-3/binutils-gdb/gas/"}}, "artifacts": [{"location": {"uri": "testsuite/gas/all/warn-1.s", "uriBaseId": "PWD"}, "contents": {"text": ";# Test .warning directive.\n;# { dg-do assemble }\n .warning \"a warning message\"\t;# { dg-warning \"Warning: a warning message\" }\n .warning a warning message\t;# { dg-error \"Error: .warning argument must be a string\" }\n .warning\t\t\t;# { dg-warning 
\"Warning: .warning directive invoked in source file\" }\n .warning \".warning directive invoked in source file\"\t;# { dg-warning \"Warning: .warning directive invoked in source file\" }\n .warning \"\"\t\t\t;# { dg-warning \"Warning: \" }\n"}}], "results": [{"ruleId": "warning", "level": "warning", "message": {"text": "a warning message"}, "locations": [{"physicalLocation": {"artifactLocation": {"uri": "testsuite/gas/all/warn-1.s", "uriBaseId": "PWD"}, "region": {"startLine": 3, "startColumn": 0, "endColumn": 1}, "contextRegion": {"startLine": 3, "snippet": {"text": " .warning \"a warning message\"\t;# { dg-warning \"Warning: a warning message\" }\n"], "relatedLocations": [{"physicalLocation": {"artifactLocation": {"uri": "testsuite/gas/all/warn-1.s", "uriBaseId": "PWD"}, "region": {"startLine": 4, "startColumn": 0, "endColumn": 1}, "contextRegion": {"startLine": 4, "snippet": {"text": " .warning a warning message\t;# { dg-error \"Error: .warning argument must be a string\" }\n"}}}, "message": {"text": ".warning argument
Re: [PATCH 2/2] libdiagnostics: work-in-progress implementation
Thank you for your work and for providing this patch.

GCC-related questions: Is it planned to change GCC diagnostics to use libdiagnostics itself? Is it planned to "directly" add features, or would the result for GCC be identical (apart from build changes)?

So far it looks like it wouldn't be possible to "just build libdiagnostics", and much less to "just distribute its source" for that purpose, would it? As building GCC takes a significant amount of resources and system-wide switching to a new GCC version is considered a serious task (distributions commonly stay with their major GCC version for quite some time), I'd look for an option to build a "self-contained" version that does not need the complete toolset and may also be distributed separately. This can definitely come later, too; I _guess_ this would mean moving part of GCC's code into a sub-folder libdiagnostics and using it as a subproject for configure/make, with the option to then run "make dist" in that subfolder alone, too.

The main reason for that would be to allow applications to move from their previous own diagnostics to libdiagnostics: if it isn't available on the system, they can build and install it as a subproject, too; and to be able to build libdiagnostics with a much reduced dependency list.

Thank you for considering that,
Simon

Am 06.11.2023 um 23:29 schrieb David Malcolm:

Here's a work-in-progress patch for GCC that adds the implementation of libdiagnostics. Various aspects of this need work; posting now for early feedback on overall direction. For example, the testsuite doesn't yet check the output from the test client programs (and I'm not quite sure of the best way to express that in DejaGnu).

gcc/ChangeLog:
* Makefile.in (lang_checks): Add check-libdiagnostics.
(start.encap): Add libdiagnostics.
(libdiagnostics_OBJS): New.
...plus a bunch of stuff hacked up from jit/Make-lang.in.
* configure: Regenerate.
* configure.ac (check_languages): Add check-libdiagnostics.
* input.h: Add FIXME.
* libdiagnostics.cc: New file.
* libdiagnostics.map: New file.

gcc/testsuite/ChangeLog:
* libdiagnostics.dg/libdiagnostics.exp: New, based on jit.exp.

---
 gcc/Makefile.in | 134 +-
 gcc/configure | 2 +-
 gcc/configure.ac | 2 +-
 gcc/input.h | 2 +-
 gcc/libdiagnostics.cc | 1124 +
 gcc/libdiagnostics.map | 57 +
 .../libdiagnostics.dg/libdiagnostics.exp | 544
 7 files changed, 1860 insertions(+), 5 deletions(-)
 create mode 100644 gcc/libdiagnostics.cc
 create mode 100644 gcc/libdiagnostics.map
 create mode 100644 gcc/testsuite/libdiagnostics.dg/libdiagnostics.exp

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index ff77d3cdc64..8f93ae48024 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -611,7 +611,7 @@ host_xm_defines=@host_xm_defines@ xm_file_list=@xm_file_list@ xm_include_list=@xm_include_list@ xm_defines=@xm_defines@ -lang_checks= +lang_checks=check-libdiagnostics lang_checks_parallelized= lang_opt_files=@lang_opt_files@ $(srcdir)/c-family/c.opt $(srcdir)/common.opt $(srcdir)/params.opt $(srcdir)/analyzer/analyzer.opt lang_specs_files=@lang_specs_files@
@@ -2153,7 +2153,7 @@ all.cross: native gcc-cross$(exeext) cpp$(exeext) specs \ libgcc-support lang.all.cross doc selftest @GENINSRC@ srcextra # This is what must be made before installing GCC and converting libraries. start.encap: native xgcc$(exeext) cpp$(exeext) specs \ - libgcc-support lang.start.encap @GENINSRC@ srcextra + libgcc-support lang.start.encap libdiagnostics @GENINSRC@ srcextra # These can't be made until after GCC can run. 
rest.encap: lang.rest.encap # This is what is made with the host's compiler @@ -2242,6 +2242,136 @@ cpp$(exeext): $(GCC_OBJS) c-family/cppspec.o libcommon-target.a $(LIBDEPS) \ c-family/cppspec.o $(EXTRA_GCC_OBJS) libcommon-target.a \ $(EXTRA_GCC_LIBS) $(LIBS) + +libdiagnostics_OBJS = libdiagnostics.o \ + libcommon.a + +# FIXME: +# Define the names for selecting jit in LANGUAGES. +# Note that it would be nice to move the dependency on g++ +# into the jit rule, but that needs a little bit of work +# to do the right thing within all.cross. + +LIBDIAGNOSTICS_VERSION_NUM = 0 +LIBDIAGNOSTICS_MINOR_NUM = 0 +LIBDIAGNOSTICS_RELEASE_NUM = 1 + +ifneq (,$(findstring mingw,$(target))) +LIBDIAGNOSTICS_FILENAME = libdiagnostics-$(LIBDIAGNOSTICS_VERSION_NUM).dll +LIBDIAGNOSTICS_IMPORT_LIB = libdiagnostics.dll.a + +libdiagnostics: $(LIBDIAGNOSTICS_FILENAME) \ + $(FULL_DRIVER_NAME) + +else + +ifneq (,$(findstring darwin,$(host))) + +LIBDIAGNOSTICS_AGE = 1 +LIBDIAGNOSTICS_BASENAME = libdiagnostics + +LIBDIAGNOSTICS_SONAME = \ + ${libdir}/$(LIBDIAGNOSTICS_BASENAME).$(LI
Re: [PATCH 2/2] libdiagnostics: work-in-progress implementation
Am 07.11.2023 um 15:59 schrieb David Malcolm:

On Tue, 2023-11-07 at 08:54 +0100, Simon Sobisch wrote:

Thank you for your work and for providing this patch. GCC-related questions: Is it planned to change GCC diagnostics to use libdiagnostics itself?

No. GCC uses C++ internally, and the internal diagnostic API is written in C++. libdiagnostics wraps up this C++ API in a C interface. GCC would continue using the C++ interface internally.

Why not provide both a C and a C++ API in libdiagnostics? GNU programs (and also others) are written in both. The benefit of using it within GCC itself ("eat your own dogfood") would be that more or less any need that GCC has is also found in the library... thinking again, this may also make it "too heavy" - not sure.

Is it planned to "directly" add features, or would the result for GCC be identical (apart from build changes)? So far it looks like it wouldn't be possible to "just build libdiagnostics", and much less to "just distribute its source" for that purpose, would it?

Correct: libdiagnostics is just an extra .cc file within the rest of GCC, and almost all the work is being done in other .cc files.

Maybe call that "status quo - initial patch"? ;-)

As building GCC takes a significant amount of resources and system-wide switching to a new GCC version is considered a serious task (distributions commonly stay with their major GCC version for quite some time), I'd look for an option to build a "self-contained" version that does not need the complete toolset and may also be distributed separately.

It's possible to reduce the resources by disabling bootstrapping, and only enabling a minimal set of languages. I'd see libdiagnostics as coming from the distribution build of GCC. I suppose distributions might want to have a simple build of GCC and ship just the .so/.h file from libdiagnostics from the build.

Agreed. But as a "user" I would like to have that "easy" option, too. As a maintainer that plans to move to libdiagnostics, it would be _very_ helpful to be able to use it as a sub-project in case it isn't available.

This can definitely come later, too; I _guess_ this would mean moving part of GCC's code into a sub-folder libdiagnostics and using it as a subproject for configure/make, with the option to then run "make dist" in that subfolder alone, too.

It would involve a lot of refactoring :)

Something to "consider along, do later", I guess.

The main reason for that would be to allow applications to move from their previous own diagnostics to libdiagnostics: if it isn't available on the system, they can build and install it as a subproject, too; and to be able to build libdiagnostics with a much reduced dependency list.

I can try to come up with a minimal recipe for building gcc if all you want is libdiagnostics.

Thanks, that already helps a lot.
Simon
Re: [PATCH 0/6] v2 of libdiagnostics
Thank you for your efforts. Having the wiki page to track this definitely is useful! I'll have a look at the "real patch" later, likely next week. But for patch 4+5 which look quite clean: can we get an early improvement and inclusion into GCC for those? They only adjust internals and should be well covered by the existing test suite, so we may be able to inspect the other changes from this patchset "alone". Kind Regards, Simon Am 21.11.2023 um 23:20 schrieb David Malcolm: Here's v2 of the "libdiagnostics" shared library idea; see: https://gcc.gnu.org/wiki/libdiagnostics As in v1, patch 1 (for GCC) shows libdiagnostic.h (the public header file), along with examples of simple self-contained programs that show various uses of the API. As in v1, patch 2 (for GCC) is the work-in-progress implementation. Patch 3 (for GCC) adds a new libdiagnostics++.h, a wrapper API providing some syntactic sugar when using the API from C++. I've been using this to "eat my own dogfood" and write a simple SARIF-dumping tool: https://github.com/davidmalcolm/libdiagnostics-sarif-dump Patch 4 (for GCC) is an internal change needed by patch 1. Patch 5 (for GCC) updates GCC's source printing code so that when there's no column information, we don't print annotation lines. This fixes the extra lines seen using it from gas discussed in: https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635575.html Patch 6 (for binutils) is an updated version of the experiment at using the API from gas. Thoughts? David Malcolm (5): libdiagnostics v2: header and examples libdiagnostics v2: work-in-progress implementation libdiagnostics v2: add C++ wrapper API diagnostics: add diagnostic_context::get_location_text diagnostics: don't print annotation lines when there's no column info gcc/Makefile.in | 131 +- gcc/configure |2 +- gcc/configure.ac |2 +- gcc/diagnostic-show-locus.cc | 26 +- gcc/diagnostic.cc | 35 +- gcc/diagnostic.h |2 + gcc/libdiagnostics++.h| 378 + gcc/libdiagnostics.cc | 1306 + gcc/libdiagnostics.h | 602 gcc/libdiagnostics.map| 63 + .../libdiagnostics.dg/libdiagnostics.exp | 544 +++ gcc/testsuite/libdiagnostics.dg/test-dump.c | 55 + .../libdiagnostics.dg/test-error-with-note.c | 57 + .../libdiagnostics.dg/test-error-with-note.cc | 47 + gcc/testsuite/libdiagnostics.dg/test-error.c | 49 + gcc/testsuite/libdiagnostics.dg/test-error.cc | 40 + .../libdiagnostics.dg/test-fix-it-hint.c | 49 + .../libdiagnostics.dg/test-fix-it-hint.cc | 44 + .../libdiagnostics.dg/test-helpers++.h| 28 + .../libdiagnostics.dg/test-helpers.h | 29 + .../libdiagnostics.dg/test-labelled-ranges.c | 52 + .../libdiagnostics.dg/test-labelled-ranges.cc | 43 + .../libdiagnostics.dg/test-logical-location.c | 60 + .../libdiagnostics.dg/test-metadata.c | 54 + .../libdiagnostics.dg/test-multiple-lines.c | 61 + .../libdiagnostics.dg/test-no-column.c| 41 + .../test-note-with-fix-it-hint.c | 52 + .../test-text-sink-options.c | 46 + .../libdiagnostics.dg/test-warning.c | 52 + .../test-write-sarif-to-file.c| 46 + .../test-write-text-to-file.c | 47 + 31 files changed, 4018 insertions(+), 25 deletions(-) create mode 100644 gcc/libdiagnostics++.h create mode 100644 gcc/libdiagnostics.cc create mode 100644 gcc/libdiagnostics.h create mode 100644 gcc/libdiagnostics.map create mode 100644 gcc/testsuite/libdiagnostics.dg/libdiagnostics.exp create mode 100644 gcc/testsuite/libdiagnostics.dg/test-dump.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-error-with-note.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-error-with-note.cc create mode 100644 
gcc/testsuite/libdiagnostics.dg/test-error.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-error.cc create mode 100644 gcc/testsuite/libdiagnostics.dg/test-fix-it-hint.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-fix-it-hint.cc create mode 100644 gcc/testsuite/libdiagnostics.dg/test-helpers++.h create mode 100644 gcc/testsuite/libdiagnostics.dg/test-helpers.h create mode 100644 gcc/testsuite/libdiagnostics.dg/test-labelled-ranges.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-labelled-ranges.cc create mode 100644 gcc/testsuite/libdiagnostics.dg/test-logical-location.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-metadata.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-multiple-lines.c create mode 100644 gcc/testsuite/libdiagnostics.dg/test-no-column.c c
COBOL: testsuite and running NIST85 (was: Re: [PATCH][v3] Simple cobol.dg testsuite)
Am 13.03.2025 um 12:49 schrieb Richard Biener:

On Thu, 13 Mar 2025, Sam James wrote:

Simon Sobisch writes:

Thanks for your work on adding a testsuite. Can you please explain why you do this when a complete testsuite exists in autoconf (autotest) format (which roots back to a decade of work in GnuCOBOL, with all copyrights for that already with the FSF)?

I don't think any of us were aware of it ("we" being "the general GCC developer community", not the COBOL folks, for the purposes of this email) until yesterday when richi mused about it on IRC maybe existing and we went looking out of curiosity. I agree that having that testsuite integrated would be fantastic.

Is the existence of this in upstream [1] just unknown (because it was not part of the initial patches [for reasons I did not understand])?

I would've personally liked to see the NIST testsuite integration at least in the initial patches, but it is what it is. I don't think the GnuCOBOL testsuite was brought up at all (and I think most of us weren't aware of it) in the patch upstreaming discussions. Now that we *are* aware of it, it seems desirable to have for sure.

Is the format such a big issue (note: previous discussions elaborated "a test suite is very important and other frontends also use a framework other than dejagnu")? If dejagnu is the way to go:

* Shouldn't there be a deprecation of autotest in autoconf (of course only if that preference also holds outside of gcc)?

It's a GCC / GNU toolchain-only preference because it allows easily doing cross + simulator testing, and all of our tools are used to its format.

That's indeed the main reason.

Thanks for the explanation. That's totally fine.

It's definitely not perfect. Years ago (way before I followed GCC), there was talk of replacing dejagnu, but efforts failed.

* Shouldn't there be an (at least semi-automated) script / migration tool (at least for this specific point in time, to convert the "UAT" once into dejagnu format)?

Yes. Having testsuite integration is seen as critical at this point. richi just wanted to present this as a non-COBOL person to give us something to play with.

Yes, and to give people familiar with how GCC tests are done a place to put regression tests going forward. I do think that integrating the testsuites the COBOLworx folks have is important, and of course integrating tests from GNU Cobol is desirable as well. Whether we can or want to integrate tests based on autotest is another question - I'd probably avoid that, even as a short-term solution, as such things tend to stay forever.

I agree.

Note: COBOLworx started by using the GnuCOBOL testsuite; even with the current UAT's state it would be a lot of manual work to re-synchronize them, so going one step further to dejagnu seems to not make it much harder either. It will definitely be useful if the "original test file names" (like run_subscripts.at, or at least run_subscripts) are kept somewhere - a comment like "auto-translated from run_subscripts.at" is enough - and maybe they can stay in one file each (I don't know enough about dejagnu to comment on that). The main point is that it seems most reasonable to convert those files into dejagnu format once (so obviously a "script working good enough, not installed" comes to mind), instead of writing it from scratch.

What would be nice is to have a common separate test harness you can test an installed compiler against - I'm not sure whether the GNU Cobol test harness or the COBOLworx one qualifies here.
The NIST one probably does, but it seems to require "plumbing" that's not part of NIST and that, in implementation, might differ from GNU Cobol to COBOLworx.

That's a good opportunity to be picky: it is GnuCOBOL (one word, COBOL in upper-case) :-)

And yes: a common separate test harness is most reasonable, and that's exactly what the idea of NIST was. If you ever wonder: GnuCOBOL uses make (with one sub-directory per "Module") along with perl [2]. This allows not only testing (or just extraction of the files) along with counting and tracking time, but also automating some of the required "needs manual inspection".

And given gcobc, I'd argue that gcobol should not fail the following (and ideally should show its superior compile and run time):

$> tar -xvf gnucobol-3.*.tar.*
$> cd gnucobol-3.*/
$> ./configure   # for automake and autoconf doing the setup
$> cd tests/cobol85
$> make test COBC=gcobc-15

... just tried that:

gcobol: error: unrecognized command-line option ‘-std=cobol85’

--> seems like gcobc should drop that and set the right flags for gcobol here (I know, this should be on bugzilla, or just fixed)

$> make test COBC=gcobc-15 COBC_FLAGS=--debug

Compiling EXEC85 program
warning: --debug implies -fsta
COBOL: Implementation of STOP RUN / GOBACK [was: [PATCH][v3] Simple cobol.dg testsuite]
> Earlier in this discussion of a testsuite, the question came up about
> generating an error return in COBOL source code.
>
> In COBOL, "GOBACK ERROR 1." is the equivalent of a C "return 1;".
> When executed in the initial "top-level" program-id, it results in
> the value 1 being passed back to the _start stub.
>
> "STOP RUN ERROR 1." is the equivalent of (and is in fact implemented
> with) "exit(1)".
>
> Bob D.

Let's speak COBOL here, and please re-consider whether this is the best option available in GCC [note: I'm also interested in the implementation in GnuCOBOL, but that's a tangent for this list].

The syntax of the STOP statement and its rules are the following ("noise" words written in parentheses, optional items in brackets, pipe gives alternatives within braces):

--
STOP RUN [ (WITH) { ERROR | NORMAL } (STATUS) [ { identifier | literal } ] ]

rules: the literal should be of non-zero length; if numeric it should be an integer (no sign, no decimal place); any constraints on the value of the literal or the contents of the identifier are defined by the implementor

If the ERROR phrase is specified, the operating system will indicate an error termination of the run unit if such a capability exists within the operating system. If the NORMAL phrase is specified, the operating system will indicate a normal termination of the run unit if such a capability exists within the operating system. During execution of the STOP statement with a literal or an identifier specified, the literal or the contents of the identifier are passed to the operating system.
--

exit() allows us to "pass to the operating system" directly; but it doesn't directly say "success" or "fail". Obviously the statements

STOP RUN WITH NORMAL STATUS 41

and

STOP RUN ERROR 41

should have a different result for the operating system. As those numbers must be unsigned, it could be reasonable to translate them to exit (41) and exit (-41). While a "STOP RUN ERROR 0" would be possible as well, there could be an implementor-defined constraint (which can be enforced for literals) that zero is not valid.

This would mean that

STOP RUN == STOP RUN WITH NORMAL STATUS == STOP RUN WITH NORMAL STATUS 0 == exit (0)

and

STOP RUN WITH ERROR STATUS == STOP RUN WITH ERROR STATUS 1 == exit (-1)

Then we additionally have the question of how to translate

STOP RUN WITH ERROR "Don't do that, Jon!"

in which case something like

  fflush (stderr);
  fprintf (stderr, "%s\n", "Don't do that, Jon!");
  exit (-1);

or even something along the lines of

  void cobol_stop_run (int status, bool error, const char *message)
    __attribute__((noreturn));

  void
  cobol_stop_run (int status, bool error, const char *message)
  {
    FILE *stream = error ? stderr : stdout;
    fflush (stream);
    if (message)
      fprintf (stream,
               error ? _("runtime exited with error status %d: %s\n")
                     : _("runtime exited with normal status %d: %s\n"),
               status, message);
    exit (error ? -status : status);
  }

Side note: I'd highly suggest keeping abort() for runtime-covered error handling (index out of bounds, program not found, ...)

Simon
Re: [PATCH][v3] Simple cobol.dg testsuite
Am 13.03.2025 um 21:35 schrieb David Malcolm: On Thu, 2025-03-13 at 12:11 +0100, Simon Sobisch wrote: Thanks for your work on adding a testsuite. Can you please explain why you do this when a complete testsuite exists in autoconf (autotest) format (which roots back to decade of work in GnuCOBOL, with all copyrights for that already with the FSF)? Is the existence of this in upstream [1] just unknown (because it was not part of the initial patches [for reasons I not understood])? Is the format such a big issue (note: previous discussions elaborated "a test suite is very important and other frontends also use a framework other than dejagnu)? If dejagnu is the way to go: * Shouldn't there be deprecation of autotest in autoconf (of course only if that preference is also outside of gcc)? * Shouldn't there be a (at least semi automated) script / migration tool (at least for this specific time in place to convert the "UAT" once into dejagnu format)? Thanks for giving me some context on this, Simon [1]: https://gitlab.cobolworx.com/COBOLworx/gcc-cobol/-/tree/master+cobol/gcc/cobol/UAT Hi Simon Does the UAT testsuite have coverage for what happens on invalid code? For example, in https://gcc.gnu.org/pipermail/gcc-patches/2025-March/677481.html my patch adds test coverage for the output on one kind of typo (or, at least, I tried to, my knowledge of COBOL is essentially 0); I put this in Richard's DejaGnu suite since I have lots of similar tests for other frontends. Yes. That is what the "syn" tests are about, for example https://gitlab.cobolworx.com/COBOLworx/gcc-cobol/-/blob/master+cobol/gcc/cobol/UAT/testsuite.src/syn_value.at # Unexpected return code 0 failure - We don't implement # type checking TODO AT_DATA([prog.cob], [ IDENTIFICATION DIVISION. PROGRAM-ID. prog. DATA DIVISION. WORKING-STORAGE SECTION. * Gnu throws ERROR on next line TODO 01 X-SPACE PIC 999 VALUE SPACE. * Gnu Throws WARNING on next two lines TODO 01 X-ABC PIC 999 VALUE "abc". 01 X-12-3PIC 999 VALUE 12.3. 01 X-123 PIC 999 VALUE 123. * Gnu Throws WARNING on next line 01 X-1234PIC 999 VALUE 1234. PROCEDUREDIVISION. STOP RUN. ]) AT_CHECK([$COMPILE_ONLY prog.cob], [1], , [prog.cob:7:25: error: numeric type X-SPACE VALUE 'SPACES' requires numeric VALUE 7 |01 X-SPACE PIC 999 VALUE SPACE. | ^ prog.cob:9:25: error: numeric type X-ABC VALUE 'abc' requires numeric VALUE 9 |01 X-ABC PIC 999 VALUE "abc". | ^ prog.cob:10:25: error: integer type X-12-3 VALUE '12.3' requires integer VALUE 10 |01 X-12-3PIC 999 VALUE 12.3. | ^ prog.cob:13:25: error: numeric X-1234 VALUE '1234' holds only 3 digits 13 |01 X-1234PIC 999 VALUE 1234. 
| ^ cobol1: error: failed compiling prog.cob ]) Notes: * in GnuCOBOL we have specific tests that check the "extended" format with context; in all other cases we pass (as part of the $COMPILE above) -fdiagnostics-plain-output - because we only want the user message, not the formatting tested overall - and producing and comparing less output also saves some (minor) computation * I'd definitely suggest that UAT is adjusted similar before the conversion for the same reasons * as hinted in the UAT notes above, GnuCOBOL test results are different, here the original result from the file this test in UAT was based on: prog.cob:6: error: invalid VALUE clause prog.cob:7: warning: numeric value is expected prog.cob:8: warning: value size exceeds data size prog.cob:10: warning: value size exceeds data size [note: according to ISO those should all be errors, but other COBOL implementations don't care and truncate "per MOVE rules", so cobc's default is a warning here] Just for your amusement, that's GC 3.2's output (with -Wall) prog.cob:6: error: invalid VALUE clause 4 |DATA DIVISION. 5 |WORKING-STORAGE SECTION. 6 >01 X-SPACE PIC 999 VALUE SPACE. 7 |01 X-ABC PIC 999 VALUE "abc". 8 |01 X-12-3PIC 999 VALUE 12.3. prog.cob:7: warning: numeric value is expected [-Wothers] 5 |WORKING-STORAGE SECTION. 6 |01 X-SPACE PIC 999 VALUE SPACE. 7 >01 X-ABC PIC 999 VALUE "abc". 8 |01 X-12-3PIC 999 VALUE 12.3. 9 |01 X-123 PIC 999 VALUE 123. prog.cob:8: warning: value size exceeds data size [-Wothers] 6 |01 X-SPACE PIC 999 VALUE SPACE. 7 |01 X-ABC PIC 999 VALUE "
Re: [PATCH][v3] Simple cobol.dg testsuite
Thanks for your work on adding a testsuite. Can you please explain why you do this when a complete testsuite exists in autoconf (autotest) format (which roots back to a decade of work in GnuCOBOL, with all copyrights for that already with the FSF)?

Is the existence of this in upstream [1] just unknown (because it was not part of the initial patches [for reasons I did not understand])? Is the format such a big issue (note: previous discussions elaborated "a test suite is very important and other frontends also use a framework other than dejagnu")?

If dejagnu is the way to go:

* Shouldn't there be a deprecation of autotest in autoconf (of course only if that preference also holds outside of gcc)?
* Shouldn't there be an (at least semi-automated) script / migration tool (at least for this specific point in time, to convert the "UAT" once into dejagnu format)?

Thanks for giving me some context on this,
Simon

[1]: https://gitlab.cobolworx.com/COBOLworx/gcc-cobol/-/tree/master+cobol/gcc/cobol/UAT
Re: COBOL: Implementation of STOP RUN / GOBACK
Am 21.03.2025 um 11:50 schrieb Jose E. Marchesi:

Am 20.03.2025 um 21:50 schrieb James K. Lowden:

On Mar 13, 2025, at 8:04 AM, Simon Sobisch wrote:

exit() allows us to "pass to the operating system" directly; but it doesn't directly say "success" or "fail". Obviously the statements STOP RUN WITH NORMAL STATUS 41 and STOP RUN ERROR 41 should have a different result for the operating system.

Or, obviously not. For OSes I'm familiar with, there is no *definition* of success/failure. There's just convention: 0 is success and nonzero failure. Even that is honored in the breach, see diff(1). IMO unless the OS defines success/failure outside the value of the exit status value (above, 41), the COBOL compiler cannot supply meaning to STOP RUN NORMAL or ERROR. It has no meaning in COBOL because it has no meaning outside COBOL. By that reasoning, the two statements above both return 41 because there is no way to say more. It is for the caller to decide what to do. I do not think -41 is an option; the compiler should not make arbitrary changes to the user's data. It is tempting to raise(SIGTERM) for error, but again the 41 is lost.

STOP RUN WITH ERROR "Don't do that, Jon!"

When no numeric value is supplied, IMO:
• STOP RUN WITH NORMAL STATUS becomes exit(EXIT_SUCCESS)
• STOP RUN WITH ERROR becomes exit(EXIT_FAILURE)

That satisfies the Principle of Least Astonishment. BTW those values are defined by C, not POSIX.

--jkl

I agree that this could be a reasonable approach:

* STOP RUN WITH NORMAL STATUS becomes exit(EXIT_SUCCESS)
* STOP RUN WITH ERROR becomes exit(EXIT_FAILURE)
* Any text given goes to an internal DISPLAY (_possibly_ WITH ERROR doing a DISPLAY UPON SYSERR)

If I didn't know that "some heavy business applications" actually pass the error using specific values (one for deadlock, another for general db issues, one for logic issues, ...) I'd say "screw the numbers - just DISPLAY them".

So:
STOP RUN
EXIT PROGRAM

That's not identical; EXIT PROGRAM just leaves the current module (if any, otherwise falls through).

should: exit (EXIT_SUCCESS).

yes

Then:
STOP RUN WITH NORMAL STATUS
should: fprintf (stderr, "STOPPED WITH NORMAL STATUS %d", number); exit (EXIT_SUCCESS);

no, applications already require the option to set that as return code to the OS.

Then:
STOP RUN WITH ERROR STATUS
should: fprintf (stderr, "STOPPED WITH ERROR STATUS %d", number); exit (EXIT_FAILURE);

same here - error code needed to the OS.

Note that:
STOP RUN WITH ERROR
would be fprintf (stderr, "STOPPED WITH ERROR"); exit (EXIT_FAILURE);

and additionally both have an optional literal (instead of the number):

STOP RUN WITH ERROR "Bad boy"
STOP RUN WITH alphanumeric-variable *> possibly containing something like "order 999 was processed".

In this case we have exit (EXIT_SUCCESS/EXIT_FAILURE) and

fprintf (stderr, "STOPPED WITH ERROR %s", message);
fprintf (stderr, "STOPPED NORMAL WITH MESSAGE %s", message);

or similar (and the stop message would likely not be all-caps and would be put under gettext, of course).

Simon
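To make this concrete, here is a minimal sketch of the mapping discussed above. cobol_stop_run and its signature are purely illustrative (not libgcobol's actual entry point), and whether a numeric ERROR status should additionally be negated is exactly the open question in this thread - the sketch passes the user's value through unchanged, following the "don't alter the user's data" argument.

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdbool.h>

  /* Hypothetical sketch of the mapping discussed above; the name and
     signature are illustrative, not libgcobol's actual entry point.  */
  void
  cobol_stop_run (bool error, bool have_status, int status,
                  const char *message /* NULL when no literal/identifier given */)
  {
    if (message)
      {
        /* "Any text given goes to an internal DISPLAY", with WITH ERROR
           arguably being a DISPLAY UPON SYSERR, i.e. stderr.  */
        fflush (NULL);
        fprintf (error ? stderr : stdout, "%s\n", message);
      }
    if (have_status)
      exit (status);              /* pass the user's value through unchanged */
    exit (error ? EXIT_FAILURE : EXIT_SUCCESS);
  }

  int
  main (void)
  {
    /* STOP RUN WITH ERROR "Don't do that, Jon!"  */
    cobol_stop_run (true, false, 0, "Don't do that, Jon!");
  }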
cobol: flags for choosing reference-format (Re: [PATCH]cobol: create new gcc/testsuite/cobol.dg/group1/check_88.cob test)
> Fixed-form, known as "reference format", is still more-or-less
> required by IBM. Forced in gcobol with the option "-ffixed-form".
> Can be controlled inside a source code file with the compiler
> directive ">> SOURCE FORMAT IS FIXED"
>
> Column 1-6 ignored
> Column 7 * for comment, - for continuation, and a few other
> special things
> Column 8 Area A -- labels started here
> Column 12 Area B -- statements go here
> Column 73 and beyond are ignored
>
> Free-form is much more forgiving. Forced with "-ffree-form", or from
> inside the program with ">> SOURCE FORMAT IS FREE"
>
> There is no line limit in free form; the entire program could fit on a
> single line.
>
> GCOBOL uses a heuristic when the format is unspecified. It looks at the
> first line of source code. If the first six characters are digits or
> blanks, it switches to "extended reference format", where the first
> six characters are ignored, column seven is the indicator column, *>
> comments can start anywhere, and there is no line length limit.

This gives three reference-formats: "fixed", "free" and "extended". For two of those we have seen the flags -ffixed-form and -ffree-form, so I'd _guess_ the last one would be -fextended-form.

Question: Is there a reason to have multiple flags for that? We dropped this in GnuCOBOL, finding that there are even more reference formats, and now use a single one:

-fformat=fixed/free/extended/.../auto

I'd like to suggest dropping those current three flags in favour of one flag for choosing the reference-format, with a value telling the frontend which one to use.

Side-note: auto-choosing "extended" was at least confusing for me (and for the initial compile-try of the NIST suite). It likely confused me most because I don't know of another compiler that chooses _that_ format automatically. I'd recommend choosing "fixed" if line numbers are recognized, but much more, I'd suggest moving from multiple format flags to a single one.

Simon
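For clarity, here is a small sketch of the auto-detection heuristic as described above (first six columns digits or blanks -> extended reference format, otherwise free form); this is only my reading of Bob's description, not gcobol's actual code.

  #include <ctype.h>
  #include <stdio.h>

  enum ref_format { FORMAT_FREE, FORMAT_EXTENDED };

  /* Heuristic as described above: if the first six columns of the first
     line are digits or blanks, assume a sequence-number area and treat
     column 7 as the indicator column ("extended reference format");
     otherwise assume free form.  Illustration only.  */
  static enum ref_format
  guess_reference_format (const char *first_line)
  {
    for (int i = 0; i < 6; i++)
      {
        char c = first_line[i];
        if (c == '\0' || c == '\n')
          break;                   /* short line: keep the assumption */
        if (c != ' ' && !isdigit ((unsigned char) c))
          return FORMAT_FREE;
      }
    return FORMAT_EXTENDED;
  }

  int
  main (void)
  {
    printf ("%d\n", guess_reference_format ("000100 IDENTIFICATION DIVISION."));
    printf ("%d\n", guess_reference_format ("identification division."));
    return 0;
  }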
Re: Re: [PATCH] cobol: Fifteen new cobol.dg testscases.
Is there a compelling reason for all this output? As a rule of thumb I'd suggest only producing output if the data has an unexpected value or if the test is about DISPLAY.

... maybe the compiler is expected (and already capable?) to know the data values and optimize all comparisons away - in that case a second test (checking, however that would be done, that the compiler optimizes them away, next to the actual DISPLAY test) would be useful.

... just wondering.
Simon
Subject: [PATCH] cobol: gcobc wrapper fixes and additions
Note: the check_GNU_style.sh script raises several errors - but it would raise them on the complete script, so I've opted to stay with its style.

Side-note: Using this updated wrapper allows running the NIST test runner from a configured GnuCOBOL source tree using

make COBC=/usr/local/gcobol-dbg/bin/gcobc COBOL_FLAGS="-fixed -g -Q -rpath -Q /usr/local/gcobol-dbg/lib64"

(the -fixed is necessary as current gcobol does not deduce the fixed-format in NIST correctly)

Kind regards,
Simon

From 187bffd7ac3fb6097d99fe54e28853bd7aeda637 Mon Sep 17 00:00:00 2001
From: Simon Sobisch
Date: Sat, 5 Apr 2025 00:25:36 +0200
Subject: [PATCH] cobol: gcobc wrapper fixes and additions

* defaults to dialect GNU (gnucobol)
* more ibm and strict dialects supported
* Implemented -A, -Q, -E
* support known alias "-debug" for "--debug"
* fix -P, -T and -W consuming source files
* deduce output file name, as done by cobc
---
 gcc/cobol/gcobc | 101
 1 file changed, 84 insertions(+), 17 deletions(-)

diff --git a/gcc/cobol/gcobc b/gcc/cobol/gcobc
index 93e1bd302a6..f503e53a336 100755
--- a/gcc/cobol/gcobc
+++ b/gcc/cobol/gcobc
@@ -73,7 +73,7 @@ fi exit_status=0 skip_arg= -opts="$copydir ${dialect:--dialect mf} $includes" +opts="$copydir $includes" mode=-shared incomparable="has no comparable gcobol option"
@@ -103,6 +103,9 @@ $0 recognizes the following GnuCOBOL cobc output mode options: $0 recognizes the following GnuCOBOL cobc compilation options: -C -d, --debug + -D + -A + -Q -E -g --coverage
@@ -115,8 +118,9 @@ $0 recognizes the following GnuCOBOL cobc compilation options: -h, --help -save-temps= -save-temps - -std=mvs - -std=mf + -std=mvs -std=mvs-strict + -std=mf -std=mf-strict + -std=cobol85 -std=cobol2002 -std=cobol2014 Options that are the same in gcobol and cobc are passed through verbatim. Options that have no analog in gcobol produce a warning message. To produce this message, use -HELP.
@@ -127,6 +131,10 @@ To override, set the gcobol environment variable. EOF } +dialect="gnu" +out_set="" +first="" + # # Simply iterate over the command-line tokens. We can't use getopts # here because it's not designed for single-dash words (e.g. -shared). 
@@ -142,14 +150,23 @@ do if [ "$pending_arg" ] then - opts="$opts $pending_arg $opt" + opts="$opts $pending_arg$opt" pending_arg= continue fi case $opt in - -A | -Q) warn "$opt" - ;; + + # pass next parameter to GCC + -A) + pending_arg=" " + ;; + + # pass next parameter to linker + -Q) + pending_arg=-Wl, + ;; + -b) mode="-shared" ;; -c) mode="-c" @@ -158,10 +175,13 @@ do ;; -C) error "$opt $incomparable" ;; - -d | --debug) opts="$opts -fcobol-exceptions=EC-ALL" + -d | -debug | --debug) opts="$opts -fcobol-exceptions=EC-ALL" warn "$opt implies -fstack-check:" ;; - # -D + # define for preprocessor, note: -D* is directly passed + -D) + pending_arg=$opt + ;; -E) opts="$opts $opt -fsyntax-only" ;; -echo) echo="echo" @@ -172,7 +192,7 @@ do opts="$opts $opt" ;; -ext) - pending_arg=$opt + pending_arg="$opt " ;; -ext=*) opts="$opts $(echo "$opt" | sed 's/-ext=/-copyext ./')" ;; @@ -354,7 +374,7 @@ do -fnot-register=*) warn "$opt" ;; -fregister=*) warn "$opt" ;; - -fformat=auto ) ;; # gcobol and gnucobol default + -fformat=auto) ;; # gcobol and gnucobol default -fixed | --fixed | -fformat=fixed | -fformat=variable | -fformat=xcard) # note: variable + xcard are only _more similar_ to fixed than free, @@ -362,7 +382,7 @@ do opts="$opts -ffixed-form" ;; - -F | -free | --free | -fformat=free | -fformat=* ) + -F | -free | --free | -fformat=free | -fformat=*) # note: "all other formats" are only _more similar_ to free than fixed opts="$opts -ffree-form" ;; @@ -405,20 +425,32 @@ do ;; # -shared is identical - -std=mvs) opts="$opts -dialect ibm" + -std=mvs | -std=mvs-strict | -
Re: [PATCH][RFC] [cobol] change cbl_field_data_t::etc_t::value from _Float128 to tree
> Now a question on COBOL:
>
> 77 var8 PIC 999V9(8) COMP-5
>
> what precision/digits does this specify? When then doing
>
> add 0.0001 TO var555 giving var8 rounded
>
> what's the precision/digit specification for the literal floating
> point values and how does that or that of var555 or var8
> "promote" or "demote" to a common specification for the arithmetic
> operation?
>
> Does the COBOL standard expect a literal floating point value to
> be represented exactly? Maybe rounding then only applies at the
> "giving var8 rounded" point, forcing the exact value to the
> specification of var8?
>
> Richard.

That's NOT a floating-point literal in COBOL, but a fixed-point numeric literal. It is best understood with the following explanation from the standard:

> An integer literal is a fixed-point numeric literal that contains no decimal point.

--> it is to be used as an integer with an implied decimal point after the first position, as if it were defined with a PIC 9v

floating-point literals in COBOL would have the following most important difference:

> A floating-point numeric literal is formed from two fixed-point numeric literals separated by the letter 'E' without intervening spaces.

The requested *minimal* precision - "stored exactly" - is:

* 31 digits for fixed-point numeric literals (and effectively all calculations with those, before rounding/truncation applies)
* up to 36 digits of significand with up to 4 digits in exponent for floating-point literals

Back to your arithmetic question: the COBOL standard expects that to be exact for an intermediate value up to these sizes. If the sizes get bigger and cannot be stored (the maximum is implementor-defined = "should be documented in the user-documentation"), then there's the option to specify which INTERMEDIATE ROUNDING should apply (= per program), which includes ROUNDING/TRUNCATION/PROHIBITED (= an exception is raised).

The default for INTERMEDIATE ROUNDING depends on which arithmetic is in effect (also defined per program; NATIVE is the default, where rounding is implementor-defined; STANDARD-BINARY and STANDARD-DECIMAL can be chosen).

> If the NATIVE phrase is specified [or implied], the techniques used in handling arithmetic expressions and intrinsic functions shall be those specified by the implementor, [as well as] the techniques used in handling arithmetic statements [mostly for rounding/truncation].

--> note that STANDARD-DECIMAL arithmetic is not an optional feature, so for full compliance it has to be considered - but it can also be "ignored in detail" for now; note that the OPTIONS paragraph (support for the paragraph is required since COBOL2002) is only supported by GnuCOBOL (neither IBM nor Micro Focus nor Fujitsu support it and all go with "native arithmetic" only).

Also mind that COBOL2014 has a note:

> Implementors are strongly encouraged to provide support for the STANDARD-DECIMAL phrase of the ARITHMETIC clause ... 
Sorry for another "background-drift"; getting to your question again:

* intermediate values need to cover at least the sizes outlined above
* for bigger sizes truncation/rounding may apply - the rules that IBM, MF and GnuCOBOL apply are different (with GnuCOBOL being nearly unlimited, using GMP with an internal scale as intermediate representation)
* the truncation/rounding that applies for a statement can be specified at the statement (as in your example, where "rounded" only applies to the final "store to the receiving item"), with the default configurable in the OPTIONS paragraph via "DEFAULT ROUNDED MODE"

Hope this helps with some background and answers your questions,
Simon
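To illustrate the "exact intermediate value, rounding only at the final store" rule with a toy example: the sketch below models decimal fixed-point values as scaled integers - one possible representation, not necessarily gcobol's or GnuCOBOL's - and assumes var555 to be PIC 999V999 holding 123.450, since the original question doesn't define it.

  #include <stdio.h>
  #include <stdint.h>

  /* Toy model of decimal fixed-point: value = digits / 10^scale.
     This is NOT gcobol's (or GnuCOBOL's) representation; it only
     illustrates "exact intermediate, rounding at the final store".
     Overflow and most edge cases are ignored for brevity.  */
  struct fixedpt { int64_t digits; int scale; };

  static struct fixedpt
  fixed_add (struct fixedpt a, struct fixedpt b)   /* exact addition */
  {
    while (a.scale < b.scale) { a.digits *= 10; a.scale++; }
    while (b.scale < a.scale) { b.digits *= 10; b.scale++; }
    return (struct fixedpt) { a.digits + b.digits, a.scale };
  }

  /* Store into a receiver of the given scale, ROUNDED (nearest, away
     from zero, decided by the first dropped digit).  */
  static int64_t
  store_rounded (struct fixedpt v, int target_scale)
  {
    while (v.scale < target_scale) { v.digits *= 10; v.scale++; }
    while (v.scale > target_scale)
      {
        int64_t dropped = v.digits % 10;
        v.digits /= 10;
        v.scale--;
        if (v.scale == target_scale && (dropped >= 5 || dropped <= -5))
          v.digits += (v.digits < 0) ? -1 : 1;
      }
    return v.digits;
  }

  int
  main (void)
  {
    struct fixedpt lit    = { 1, 4 };       /* the literal 0.0001, exact  */
    /* var555 is not defined in the mail; assume PIC 999V999 = 123.450.   */
    struct fixedpt var555 = { 123450, 3 };
    struct fixedpt sum = fixed_add (lit, var555);   /* exactly 123.4501   */
    /* var8 is PIC 999V9(8): 3 integer + 8 fraction digits, scale 8.      */
    printf ("%lld\n", (long long) store_rounded (sum, 8)); /* 12345010000 */
    return 0;
  }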
Re: [committed] cobol: Eliminate cobolworx UAT errors when compiling with -Os
Nice that you have that covered, and thanks for sharing your way there.

I just want to offer an additional thing to consider: different environments have a different byte-size for the return code (the most common on COBOL for PC is 4, but 2 and 8 also exist). Micro Focus COBOL and GnuCOBOL have a dialect option (effectively a compiler switch) to set the return code size *per compile-unit*. I'm not 100% sure if the one in libgcobol is needed, but you likely want to consider increasing its _storage_ to 4 or 8 bytes; this way you don't have to cast on stop_run and can just assign as necessary.

Just my 5ct,
Simon
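A tiny sketch of what I mean by widening the storage (all names are made up for illustration, this is not libgcobol code): keep RETURN-CODE in a wide integer so assignments need no cast, and narrow only at the exit boundary according to the per-unit width.

  #include <stdint.h>
  #include <stdlib.h>

  /* Illustration only - names and layout are hypothetical, not libgcobol's.
     Keep the runtime's RETURN-CODE in wide storage and narrow only when
     leaving the run unit, with the width ideally a per-compile-unit
     (dialect) choice.  */
  static int64_t cobol_return_code;        /* wide storage */
  static unsigned return_code_bytes = 4;   /* 2, 4 or 8, per dialect option */

  static void
  cobol_exit_run_unit (void)
  {
    int64_t rc = cobol_return_code;
    if (return_code_bytes == 2)
      rc = (int16_t) rc;
    else if (return_code_bytes == 4)
      rc = (int32_t) rc;
    exit ((int) rc);               /* libc/OS may narrow further */
  }

  int
  main (void)
  {
    cobol_return_code = 41;        /* e.g. MOVE 41 TO RETURN-CODE */
    cobol_exit_run_unit ();
  }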
COBOL: why not getting UAT/NIST "temporarily" to trunk (was: [PATCH] cobol, v2: Get rid of __int128 uses in the COBOL FE [PR119242])
> > Implicit criticism about tests accepted. I have 679 UAT tests, and
> > now I've got the bit in my teeth, and I am creating a process that
> > will convert as many as I can to DejaGnu. However: the autom4te and
> > DejaGnu principles, practices, and philosophies are almost, but not
> > quite, completely unlike each other.
>
> Didn't mean to criticize in any way, just mention that because we have
> just 23 tests so far, the UAT/NIST testing you are doing is a
> precondition for any non-trivial FE changes and that testing wasn't
> done on my side.

While I'm feeling like the old grumpy man I possibly am sometimes by asking the same question over and over again... let me make this explicit one time to this list (as I've not seen a response on-list to that, sorry if I missed it):

Is there any reason at all why the UAT should not be added to trunk, like immediately, even if it is only temporary?

I totally see the point that sticking with what the project *mostly* uses (not all frontends do use DejaGnu!) is reasonable. I also see that the way it is set up (running the same tests multiple times with different options) is better than what the UAT currently does (I can offer help to do something similar with the autotest-generated testsuite, as we did that in GnuCOBOL in the past), but I mostly see:

* there are very reasonable testcases
* there are a lot of testcases
* again and again contributors don't know if their changes are fine, because the DejaGnu part doesn't cover them; and patches sent in need to be tested by COBOLworx
* UAT doesn't need any additional software - it is just autoconf, make, shell and is portable in general (cross-compiling without executing would need adjustments: "AT_SKIP", or assigning a shell script that exits with 77 instead of running the program)
* the whole testsuite is either already copyrighted by the FSF (the parts that were taken from GnuCOBOL) or COBOLworx (changes and new tests) - and the tests converted to DejaGnu are also contributed by COBOLworx, so there should be no problem in contributing UAT directly
* converting tests to DejaGnu takes time from Bob, and there's only a single Bob available - while bugzilla gets new entries that would also need his focus

Overall my conclusion is: time seems to be spent MUCH more usefully for everyone if UAT were just added as-is for now. I should be able to provide some patches to run it multiple times with different options (like DejaGnu does), if wanted, as well. GCC-15 could ship with the generated testsuites, so the user does not need any additional tools either, and if really wanted, all of UAT could be converted to DejaGnu for GCC-16.

Plain question again: Is there any reason at all why the UAT should not be added to trunk, like immediately, with the move to DejaGnu *potentially* postponed?

A _separate_ issue is the question whether NIST could be checked in as well, configured to not run automatically (so independent from users or build recipes) but in a way that makes it easy to execute on the contributor side. If not: what about making the "nist" sub-folder a tarball that can just be extracted to gcc/cobol and then be used on the contributor side (there's no autotools stuff in there, so that would be possible)?

Kind regards and thanks for your work on COBOL,
Simon
cobol.1 fix for not using underscores in intrinsic function names
Just stumbled over this and only have a mail client running, so... patch as text. The change is the same in all those cases: change _ (likely taken over from the parser or similar) to -.

Kind regards,
Simon

-BASECONVERT BIT_OF BIT_TO_CHAR BOOLEAN_OF_INTEGER BYTE_LENGTH
+BASECONVERT BIT-OF BIT-TO-CHAR BOOLEAN-OF-INTEGER BYTE-LENGTH
.It
-CHAR CHAR_NATIONAL COMBINED_DATETIME CONCAT CONVERT COS CURRENT_DATE
+CHAR CHAR-NATIONAL COMBINED-DATETIME CONCAT CONVERT COS CURRENT-DATE
.It
-DATE_OF_INTEGER DATE_TO_MMDD DAY_OF_INTEGER DAY_TO_DDD DISPLAY_OF
+DATE-OF-INTEGER DATE-TO-MMDD DAY-OF-INTEGER DAY-TO-DDD DISPLAY-OF
.It
-E EXCEPTION_FILE
-EXCEPTION_FILE_N EXCEPTION_LOCATION EXCEPTION_LOCATION_N
-EXCEPTION_STATEMENT EXCEPTION_STATUS EXP EXP10
+E EXCEPTION-FILE
+EXCEPTION-FILE-N EXCEPTION-LOCATION EXCEPTION-LOCATION-N
+EXCEPTION-STATEMENT EXCEPTION-STATUS EXP EXP10
.It
-FACTORIAL FIND_STRING
-FORMATTED_CURRENT_DATE FORMATTED_DATE FORMATTED_DATETIME
-FORMATTED_TIME FRACTION_PART
+FACTORIAL FIND-STRING
+FORMATTED-CURRENT-DATE FORMATTED-DATE FORMATTED-DATETIME
+FORMATTED-TIME FRACTION-PART
.It
-HEX_OF HEX_TO_CHAR HIGHEST_ALGEBRAIC
+HEX-OF HEX-TO-CHAR HIGHEST-ALGEBRAIC
.It
-INTEGER INTEGER_OF_BOOLEAN INTEGER_OF_DATE INTEGER_OF_DAY
-INTEGER_OF_FORMATTED_DATE INTEGER_PART
+INTEGER INTEGER-OF-BOOLEAN INTEGER-OF-DATE INTEGER-OF-DAY
+INTEGER-OF-FORMATTED-DATE INTEGER-PART
.It
-LENGTH LOCALE_COMPARE
-LOCALE_DATE LOCALE_TIME LOCALE_TIME_FROM_SECONDS LOG LOG10 LOWER_CASE
-LOWEST_ALGEBRAIC
+LENGTH LOCALE-COMPARE
+LOCALE-DATE LOCALE-TIME LOCALE-TIME-FROM-SECONDS LOG LOG10 LOWER-CASE
+LOWEST-ALGEBRAIC
.It
-MAX MEAN MEDIAN MIDRANGE MIN MOD MODULE_NAME
+MAX MEAN MEDIAN MIDRANGE MIN MOD MODULE-NAME
.It
-NATIONAL_OF NUMVAL NUMVAL_C NUMVAL_F ORD
+NATIONAL-OF NUMVAL NUMVAL-C NUMVAL-F ORD
.It
-ORD_MAX ORD_MIN
+ORD-MAX ORD-MIN
.It
-PI PRESENT_VALUE
+PI PRESENT-VALUE
.It
RANDOM RANGE REM REVERSE
.It
-SECONDS_FROM_FORMATTED_TIME
-SECONDS_PAST_MIDNIGHT SIGN SIN SMALLEST_ALGEBRAIC SQRT
-STANDARD_COMPARE STANDARD_DEVIATION SUBSTITUTE SUM
+SECONDS-FROM-FORMATTED-TIME
+SECONDS-PAST-MIDNIGHT SIGN SIN SMALLEST-ALGEBRAIC SQRT
+STANDARD-COMPARE STANDARD-DEVIATION SUBSTITUTE SUM
.It
-TAN TEST_DATE_MMDD TEST_DAY_DDD TEST_FORMATTED_DATETIME
-TEST_NUMVAL TEST_NUMVAL_C TEST_NUMVAL_F TRIM
+TAN TEST-DATE-MMDD TEST-DAY-DDD TEST-FORMATTED-DATETIME
+TEST-NUMVAL TEST-NUMVAL-C TEST-NUMVAL-F TRIM
Re: cobol: flags for choosing reference-format
Am 17.03.2025 um 18:25 schrieb James K. Lowden:

On Sun, 16 Mar 2025 21:07:39 +0100 Simon Sobisch wrote:

This gives three reference-formats: "fixed", "free" and "extended". For two of those we have seen the flags -ffixed-form and -ffree-form, so I'd _guess_ the last one would be -fextended-form. Question: Is there a reason to have multiple flags for that?

[I think this thread belongs in gcc@ because there's no patch to discuss. I'm answering here for the sake of continuity.]

-ffixed-form and -ffree-form are the names gfortran uses. To get "logical reference format" -- unlimited lines with the first 6 columns ignored and indicator column 7 -- we have

-findicator-column=7

It's not a great name, not least because it seems invertible but is not. ("-fno-indicator-column=n" makes no sense.) OTOH it says what it means: the location of the indicator column, with no mention of a line length limit (because there isn't one).

-fformat=fixed/free/extended/.../auto

The problem here IMO is the burden of names. Each combination of left/right margin needs a name, all of which are arbitrary. "Extended" from what, and to what? Every compiler seems to have its own variation.

"extended" came from Bob's notes, not mine :-) In general: words are just words, you can choose "arbitrary but reasonable ones".

If -- if -- we were to support other formats I'd be inclined to use

-source-format from[-to]

so the user says where the indicator column is, and what the maximum length is, if any. So,

-ffixed-form    is -source-format 7-72
-ffree-form     is -source-format 1
(logical ref)   is -source-format 7
(no indicator)  is -source-format 0

with the implied rule that, if the first column is 1, then '*' is honored as a comment, else the character is part of the COBOL text.

That covers... only a very small subset. Here is GnuCOBOL's documentation on source formats:

--
@node Source format
@subsection Source format
GnuCOBOL supports fixed, free, Micro Focus' Variable, X/Open Free-form, ICOBOL xCard and Free-form, ACUCOBOL-GT Terminal, and COBOLX source formats. By default, the compiler tries to autodetect the format using the indicator on the first line, using the fixed format for correct indicators and the free format for incorrect ones. This can be overridden either by the @code{>>SOURCE [FORMAT] [IS] @{FIXED|FREE|COBOL85|VARIABLE|XOPEN|XCARD|CRT|TERMINAL|COBOLX|AUTO@}} directive, or by one of the following options:

@table @code

@item -free, -F, -fformat=free
Free format. The program-text area starts in column 1 and continues till the end of line (effectively 255 characters in GnuCOBOL).

@item -fixed, -fformat=fixed
Fixed format. Source code is divided into: columns 1-6, the sequence number area; column 7, the indicator area; columns 8-72, the program-text area; and columns 72-80 as the reference area. @footnote{Historically, fixed format was based on 80-character punch cards.}

@item -fformat=cobol85
Fixed format with enforcements on the use of Area A.

@item -fformat=variable
Micro Focus' Variable format. Identical to the fixed format above except for the program-text area, which extends up to column 250 instead of 72.

@item -fformat=xcard
ICOBOL xCard format. Variable format with right margin set at column 255 instead of 250.

@item -fformat=xopen
X/Open Free-form format. The program-text area may start in column 1 unless an indicator is present, and lines may contain up to 255 characters. Indicator for debugging lines is @samp{D } (D followed by a space) instead of @samp{D} or @samp{d}.

@item -fformat=crt
ICOBOL Free-form format (CRT). Similar to the X/Open format above, with lines containing up to 320 characters and single-character debugging line indicators (@samp{D} or @samp{d}).

@item -fformat=terminal
ACUCOBOL-GT Terminal format. Similar to the CRT format above, with the indicator for debugging lines being @samp{\D} instead of @samp{D} or @samp{d}. This format is mostly compatible with VAX COBOL terminal source format.

@item -fformat=cobolx
COBOLX format. This format is similar to the CRT format above, except that the indicator area is always present in column 1; the program-text area starts in column 2 and extends up to the end of the record. Lines may contain up to 255 characters.

@item -fformat=auto
Autodetection of format. The compiler will use the first line of the file to detect whether the file is in fixed format (with a correct indicator at position 7), or in free format.

@end table

Note that with source formats @code{XOPEN}, @code{CRT}, @code{TERMINAL}, and @code{COBOLX}, missing spaces are not inserted within continued alphanumeric literals that are truncated before the right margin. @emph
Re: [PATCH] cobol: Fix up update_web_docs_git for COBOL [PR119227]
Note that there's still an index.html missing, compare https://gcc.gnu.org/onlinedocs/gfortran/index.html and https://gcc.gnu.org/onlinedocs/gcobol/index.html (not found) Simon
Re: [PATCH] cobol: Avoid conflict with OVERFLOW in system headers [PR119217]
Shouldn't that be

--- a/gcc/cobol/parse.y
+++ b/gcc/cobol/parse.y
@@ -337,7 +337,7 @@
 %token INVALID
 %token NUMBER NEGATIVE
 %token NUMSTR "numeric literal"
-%token OVERFLOW
+%token OVERFLOW_kw "OVERFLOW"
 %token COMPUTATIONAL

? Otherwise bison syntax messages will use OVERFLOW_kw instead of the "real" name - if this _kw is done with other tokens in parse.y I think it should similarly specify the token "message identifier".

Simon
Re: [PATCH] cobol: Allow for undefined NAME_MAX [PR119217]
> You're right: seems to be all about COBOL function names. No idea what value is appropriate/required here, though.

If this is about COBOL internal function names: ISO says, and GnuCOBOL therefore defines,

/* Maximum length of COBOL words */
#define COB_MAX_WORDLEN 63

Note that _externalized_ words (like function names) _can_ be longer (that limit is implementor-defined). If that is about the names of the functions generated by gcobol - you could have a look at the externalized symbols of libgcobol.

For both 2 + 3 I think 255 is much more than enough; 127 is likely also enough...

Kind regards, Simon
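For illustration only: if the value is just needed to size buffers for such names, a guarded fallback along these lines would do (the chosen limit of 255 is the assumption discussed above, not something the patch prescribes):

#include <limits.h>

/* Fallback for systems whose headers do not define NAME_MAX; 255 comfortably
   exceeds the 63-character COBOL word limit quoted above.  */
#ifndef NAME_MAX
# define NAME_MAX 255
#endif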
ping on fixes for cobol.1 + gcobc
With GCC 15.1 in sight... ping on the gcobc wrapper fixes and additions: https://gcc.gnu.org/pipermail/gcc-patches/2025-April/680218.html

(Note: obviously it would be good if -Wall [1] worked - the "global" PR for -Wall was postponed to GCC 16, so possibly add it to gcobol as an intermediate step? - then the special handling for it could be dropped. Just recognized: -v (verbose) also does not work for gcobol.)

cobol.1 fix: https://gcc.gnu.org/pipermail/gcc-patches/2025-April/680557.html

Kind regards, Simon

[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119329
Re: ping: COBOL: testsuite and running NIST85
Am 07.04.2025 um 09:30 schrieb Richard Biener:

On Mon, Apr 7, 2025 at 9:00 AM Simon Sobisch wrote:

My question stands on integrating COBOLworx' UAT as-is for now (Copyright is all on FSF; built automatically [it is autoconf, which is a requirement for VCS checkouts], possibly also hooked into the current test target) - with the goal to get rid of UAT later (next GCC version, not GCC 15). There's also the question about integrating NIST into GCC upstream - that is a subfolder and would only be executed upon explicit call by maintainers (newcob.val / newcob.val.gz may be either included in VCS or even downloaded manually...).

As I repeatedly said I'd welcome a test harness like Ada ACATS for running the NIST testsuite plus a contrib/download_cobol_nist script that downloads the NIST file and prepares it for use. I'd suggest to, as with ACATS, have a separate make target for testing (but still invoked with make check, when present).

Sounds good. As it is a single curl operation that can also be done with make in that subfolder, do we still need a separate download script? I understand that in any case the test harness would check for the newcob.val file existing (builddir only is fine, right?), and, if it does, then execute a `${MAKE} -C builddir/...test.../nist`. If Jim doesn't find the time to do this (please respond on this), I can prepare a patch (contributing mostly COBOLworx' work for that setup and config).

As for UAT, I understand it's work in progress to get that converted to dejagnu?

It is, but full UAT will take weeks, if not months, as far as I've understood Bob. I feel the urge to have his time spent on other things than a conversion (which _does_ provide additional benefits, like testing with more configurations and being better integrated into the rest of GCC's tests), as GCC 15 is near and the amount of things to do doesn't get smaller. But whatever you guys decide will happen, I mostly wanted to raise my concern.

With UAT, gcobol would have MUCH more test coverage directly for everyone; with NIST, developers would have the chance to run "what is not disabled" from that testsuite for bigger changes like the FLOAT_128/libmath adjustment and when working on a new target. Both parts are already in the COBOLworx repo and work; they can be used directly to check for regressions, and the move from UAT to dejagnu can still be done after the increasing pile of bugs (which, as a COBOL programmer, I partly find quite severe) and possibly some feature requests (especially around huge codegen) are taken care of by the "rare resources" Bob and Jim.

Concerning NIST: please take care not to sink to the same low level as COBOL-IT and others by claiming gcobol passes NIST - it doesn't (no current compiler passes all modules, and I think GnuCOBOL is the only one that nearly passes everything [and is able to at least parse the parts that are disabled - around the COMMUNICATION module, which was obsolete in COBOL85 and was kind of resurrected by COBOL2023's Message Control System [MCS]]).

Simon
ping: COBOL: testsuite and running NIST85
My question stands on integrating COBOLworx' UAT as-is for now (Copyright is all on FSF; built automatically [it is autoconf, which is a requirement for VCS checkouts], possibly also hooked into the current test target) - with the goal to get rid of UAT later (next GCC version, not GCC 15). There's also the question about integrating NIST into GCC upstream - that is a subfolder and would only be executed upon explicit call by maintainers (newcob.val / newcob.val.gz may be either included in VCS or even downloaded manually...).

With UAT, gcobol would have MUCH more test coverage directly for everyone; with NIST, developers would have the chance to run "what is not disabled" from that testsuite for bigger changes like the FLOAT_128/libmath adjustment and when working on a new target. Both parts are already in the COBOLworx repo and work; they can be used directly to check for regressions, and the move from UAT to dejagnu can still be done after the increasing pile of bugs (which, as a COBOL programmer, I partly find quite severe) and possibly some feature requests (especially around huge codegen) are taken care of by the "rare resources" Bob and Jim.

Concerning NIST: please take care not to sink to the same low level as COBOL-IT and others by claiming gcobol passes NIST - it doesn't (no current compiler passes all modules, and I think GnuCOBOL is the only one that nearly passes everything [and is able to at least parse the parts that are disabled - around the COMMUNICATION module, which was obsolete in COBOL85 and was kind of resurrected by COBOL2023's Message Control System [MCS]]).

Simon
Re: ping: COBOL: testsuite and running NIST85
Am 07.04.2025 um 09:36 schrieb Jakub Jelinek:

On Mon, Apr 07, 2025 at 09:30:59AM +0200, Richard Biener wrote:

On Mon, Apr 7, 2025 at 9:00 AM Simon Sobisch wrote:

My question stands on integrating COBOLworx' UAT as-is for now (Copyright is all on FSF; built automatically [it is autoconf, which is a requirement for VCS checkouts], possibly also hooked into the current test target) - with the goal to get rid of UAT later (next GCC version, not GCC 15). There's also the question about integrating NIST into GCC upstream - that is a subfolder and would only be executed upon explicit call by maintainers (newcob.val / newcob.val.gz may be either included in VCS or even downloaded manually...).

As I repeatedly said I'd welcome a test harness like Ada ACATS for running the NIST testsuite plus a contrib/download_cobol_nist script that downloads the NIST file and prepares it for use. I'd suggest to, as with ACATS, have a separate make target for testing (but still invoked with make check, when present).

But it would be much better if the harness for NIST testing was in dejagnu rather than anything else; only that can easily handle cross-compilation with target boards, parallelization respecting the make jobserver, and seamless result integration. My understanding has been that NIST is a single file from which some tool needs to dig up individual testcases (tcl string support should be able to deal with that), figuring out what options etc. to pass for each test and finding out from somewhere the expected output for each test.

Jakub

The source is available as "newcob.val". From there a COBOL program, EXEC85, is extracted, which is then to be compiled. This extracted COBOL program is then run to extract the requested NIST modules for test, with the configuration that it reads from a file. These modules contain more COBOL sources, which are to be compiled and run and which produce a test report each. A final step is then to compare the results with the expectation.

I don't know enough about dejagnu to say if/how this may do several parts (it should start by compiling EXEC85 and then use that, with its configurations, to extract the modules and run those - or you just compile that and run it for extraction once, via make, then run the modules multiple times using dejagnu).

The GnuCOBOL driver for NIST is written in Perl; it "only" handles the part of running the module tests: starting from compilation, partially with different options as requested for some modules [like the DB one, which gcc cobol does not test currently]; executing them; partially checking compiler messages, where the compiler's ability to "flag" something is to be tested; checking the log files to gather the number of single tests within each program and counting pass/fail [or compile error up front]; recording the execution time; and finally creating an "overview" report.log and execution time for each module, which is then compared with plain diff + make. As I understood it, COBOLworx has put this all into make (and dropped part of the things that the GnuCOBOL driver does).

Clone or look at https://gitlab.cobolworx.com/COBOLworx/gcc-cobol/-/tree/parser/gcc/cobol/nist to find out what COBOLworx did exactly, and https://sourceforge.net/p/gnucobol/code/HEAD/tree/branches/gnucobol-3.x/tests/cobol85/ [1] to find out what GC does.

Simon

[1] or https://github.com/OCamlPro/gnucobol/tree/gcos4gnucobol-3.x/tests/cobol85, if you prefer a git mirror
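To make the flow above concrete, here is a deliberately rough sketch in C, using illustrative commands and file names (EXEC85.cbl stands for the EXEC85 source already pulled out of newcob.val; this is not the COBOLworx or GnuCOBOL harness):

#include <stdlib.h>

int
main (void)
{
  /* 1. Compile the EXEC85 extraction program taken from newcob.val.  */
  if (system ("gcobol -o EXEC85 EXEC85.cbl") != 0)
    return 1;

  /* 2. Run it; it reads its configuration file and writes the requested
        NIST modules out as individual COBOL source programs.  */
  if (system ("./EXEC85") != 0)
    return 1;

  /* 3. Each extracted program is then compiled and run, and its report is
        compared against the expected results (omitted here).  */
  return 0;
}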
Re: [PATCH] cobol: Fix up cobol/{charmaps,valconv}.cc rules
+cobol/charmaps.cc cobol/valconv.cc: cobol/%.cc: $(LIB_SOURCE)/%.cc
+	-l='ec\|common-defs\|io\|gcobolio\|libgcobol\|gfileio\|charmaps'; \
+	l=$$l'\|valconv\|exceptl'; \
+	sed -e '/^#include/s,"\('$$l'\)\.h","../../libgcobol/\1.h",' $^ > $@

The proposed rule is much better than the old one - but is there a technical reason to not just add -I ../../libgcobol or, possibly better, -I $(LIB_SOURCE) to appropriate CPPFLAGS?

Simon
Re: [PATCH] cobol: Fix up cobol/{charmaps,valconv}.cc rules
Thanks. I agree that, when the frontend includes library headers, the include should name the library folder explicitly.
Re: COBOL: Implementation of STOP RUN / GOBACK
Am 20.03.2025 um 21:50 schrieb James K. Lowden:

On Mar 13, 2025, at 8:04 AM, Simon Sobisch wrote:

exit() allows us to "pass to the operating system" directly; but it doesn't directly say "success" or "fail". Obviously the statements STOP RUN WITH NORMAL STATUS 41 and STOP RUN ERROR 41 should have a different result for the operating system.

Or, obviously not. For OSes I'm familiar with, there is no *definition* of success/failure. There's just convention: 0 is success and nonzero failure. Even that is honored in the breach, see diff(1). IMO unless the OS defines success/failure outside the value of the exit status (above, 41), the COBOL compiler cannot supply meaning to STOP RUN NORMAL or ERROR. It has no meaning in COBOL because it has no meaning outside COBOL. By that reasoning, the two statements above both return 41 because there is no way to say more. It is for the caller to decide what to do. I do not think -41 is an option; the compiler should not make arbitrary changes to the user's data. It is tempting to raise(SIGTERM) for error, but again the 41 is lost.

STOP RUN WITH ERROR "Don't do that, Jon!"

When no numeric value is supplied, IMO:
• STOP RUN WITH NORMAL STATUS becomes exit(EXIT_SUCCESS)
• STOP RUN WITH ERROR becomes exit(EXIT_FAILURE)

That satisfies the Principle of Least Astonishment. BTW those values are defined by C, not POSIX.

--jkl

I agree that this could be a reasonable approach:
* STOP RUN WITH NORMAL STATUS becomes exit(EXIT_SUCCESS)
* STOP RUN WITH ERROR becomes exit(EXIT_FAILURE)
* Any text given goes to an internal DISPLAY (_possibly_ WITH ERROR doing a DISPLAY UPON SYSERR)

If I didn't know that "some heavy business applications" actually pass the error using specific values (one for deadlock, another for general db issues, one for logic issues, ...) I'd say "screw the numbers - just DISPLAY them". But a combined option would be possible as well:

* "STOP ... WITH ERROR" still does exit(number_given) (or exit(EXIT_FAILURE) if no number was given), but always after an internal DISPLAY "STOP WITH ERROR[ nr][: message]" UPON SYSERR
* "STOP ... WITH NORMAL STATUS" still does exit(number_given) (or exit(EXIT_SUCCESS) if no number was given), and (possibly only in case of a text or number given) an internal DISPLAY "STOP WITH NORMAL STATUS[ nr][: message]" UPON SYSOUT

Opinions?

Simon
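A minimal sketch of that combined option, assuming a hypothetical runtime helper (the function name, signature, and message wording are illustrative only, not the actual gcobol/libgcobol interface):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: is_error selects WITH ERROR vs. WITH NORMAL STATUS,
   has_status says whether the user supplied a number, msg is the optional
   text (or NULL).  */
static void
cobol_stop_run (int is_error, int has_status, int status, const char *msg)
{
  FILE *sink = is_error ? stderr : stdout;      /* SYSERR vs. SYSOUT */

  /* The internal DISPLAY; for NORMAL STATUS perhaps only when a number or
     text was actually given.  */
  if (is_error || has_status || msg)
    {
      fprintf (sink, "STOP RUN WITH %s", is_error ? "ERROR" : "NORMAL STATUS");
      if (has_status)
        fprintf (sink, " %d", status);
      if (msg)
        fprintf (sink, ": %s", msg);
      fputc ('\n', sink);
    }

  exit (has_status ? status : (is_error ? EXIT_FAILURE : EXIT_SUCCESS));
}

int
main (void)
{
  /* Corresponds to: STOP RUN ERROR 41 "Don't do that, Jon!"  */
  cobol_stop_run (1, 1, 41, "Don't do that, Jon!");
}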
cobol: default iconv encoding (was: [PATCH] cobol: Address some iconv issues.)
> Secondly, using Windows code page 1252 as a default seems overly restrictive.
>
> -static const char standard_internal[] = "CP1252//";
> +static const char standard_internal[] =
> +#if __APPLE__
> +"ISO8859-1";
> +#else
> +"CP1252//";
> +#endif

I'd highly suggest going with ISO8859-15 in general (i.e. dropping that conditional compilation altogether); I think that's very widely available and covers the EUR sign as well.

Simon
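In other words, the suggestion amounts to something like the following (a sketch only; whether the trailing "//" stays depends on how the surrounding iconv code builds its conversion names):

/* Single unconditional default: ISO8859-15 (Latin-9) is widely available
   in iconv implementations and, unlike ISO8859-1, includes the Euro sign.  */
static const char standard_internal[] = "ISO8859-15//";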
COBOL constant compile-time expressions and numeric literals (was: [PATCH][RFC] [cobol] change cbl_field_data_t::etc_t::value from _Float128 to tree)
> Section 8.3.3.3 of the ISO spec defines both fixed- and floating-point numeric literals.
>
> "A fixed-point numeric literal is a character-string whose characters are selected from the digits '0' through '9', the plus sign, the minus sign, and the decimal point. The implementor shall allow for fixed-point numeric literals of 1 through 31 digits in length."
>
> "A floating-point numeric literal is signified by an 'E' between two fixed-point literals, where the exponent may have no more than 4 digits, and no decimal point."
>
> What exactly that implies for constant compile-time expressions, now that fixed-point computation is available, I'm not sure. I just want to clarify what ISO says, to avoid any confusion.

There are more rules for compile-time arithmetic expressions (7.3.6, 7.3.8). Most important: "compile-time arithmetic expressions" in COBOL are _only_ referenced in the compiler directives (that is, they are not related to constant folding or similar).

> 1) Compile-time arithmetic expressions shall be formed in accordance with 8.8.1, Arithmetic expressions, with the following exceptions:
> a) The exponentiation operator shall not be specified.
> b) All operands shall be fixed-point numeric literals or arithmetic expressions in which all operands are fixed-point numeric literals.
> c) The expression shall be specified in such a way that a division by zero cannot occur.
>
> 2) The implementor shall define and document any rules restricting the precision and/or magnitude and/or range of permissible values for the intermediate results needed to evaluate the arithmetic expression. They shall also document which intermediate rounding method is used, if applicable.

So for the compiler directing facility (CDF) there is no floating-point at all, and you can just decide (as long as it is documented) the precision / rounding [or truncation] that applies. For any "constant expressions" that you find in the real "code", they have to work per the program-specific rules (defaults that the user may override).

Simon
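Purely as an illustration of the quoted definitions (the C arrays are just a container for the examples; the literals themselves are COBOL, not C):

/* Valid per the quoted 8.3.3.3 wording: digits, optional sign, optional
   decimal point, at most 31 digits.  The last one has exactly 31 digits.  */
static const char *fixed_point_examples[] =
  { "0", "-1.5", "+123456789012345678901234567890.1" };

/* 'E' between two fixed-point literals; the exponent has no decimal point
   and at most 4 digits.  */
static const char *floating_point_examples[] =
  { "1.5E-4", "-2E10" };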
Re: Re: [PATCH] Add COBOL to htdocs/gcc-15/changes.html.
Please adjust the text; there are too many people claiming to pass NIST, but in the case of gcobol that's not true, even if you ignore modules that are now obsolete or archaic. See my mail on how to run NIST with the GnuCOBOL setup - there are still too many failures. I hope that statement may become true in a later GCC 15 release as part of fixes, but until then please move that to "references", as you did with the current standard, not to "passes".

Note: you may want to include a link to the gcobol manual, which is also online in the GCC web space.

Simon
libiberty: Would it be reasonable to add support for GnuCOBOL function name demangling?
Hi fellow hackers,

first of all: I'm not sure if this is the correct mailing list for this question, but I did not find a separate one, and gnu.org/software/libiberty redirects to https://gcc.gnu.org/onlinedocs/libiberty.pdf - so I'm here. If there's a better place for this: please drop a note.

I've never "worked" with libiberty directly, but I'm sure I use it quite regularly through various tools, including GDB and valgrind. Therefore I currently cannot send a patch for the function name demangling, but if this is a reasonable thing to add I'd like to work on it with someone.

As noted, the first question is: is it reasonable to add support for GnuCOBOL?

* How would the demangler know it is to be called? Just "best match" (GnuCOBOL modules always have some symbols in them which should be available if there is any debugging information, if that helps)?
* Given the work on gcc-cobol, which was discussed on this mailing list some months ago (not sure about its current state), there will possibly be COBOL support "integrated" - with possibly different name mangling. But still - GnuCOBOL has been used "in the wild" (for production environments) for years (and will be for many years to come, both based on GCC and with other compilers), and the name mangling rules have not changed.

A second question would be: Is there anyone who would be willing to work on this with me? Where would "we" or I start?

Thank you for taking the time to read and possibly answer,
Simon Sobisch
Maintainer GnuCOBOL
Re: libiberty: Would it be reasonable to add support for GnuCOBOL function name demangling?
Am 27.05.22 um 20:31 schrieb Eric Gallager:

On Fri, May 27, 2022 at 3:17 AM Simon Sobisch via Gcc-patches wrote:

[...] the first question is: is it reasonable to add support for GnuCOBOL?
* How would the demangler know it is to be called? Just "best match" (GnuCOBOL modules always have some symbols in them which should be available if there is any debugging information, if that helps)?
* Given the work on gcc-cobol, which was discussed on this mailing list some months ago (not sure about its current state), there will possibly be COBOL support "integrated" - with possibly different name mangling. But still - GnuCOBOL has been used "in the wild" (for production environments) for years (and will be for many years to come, both based on GCC and with other compilers), and the name mangling rules have not changed.

If the plan is to integrate GnuCOBOL into trunk, then I'd say adding demangling support for it to libiberty would not only be reasonable, but also a necessary prerequisite for merging the rest of it.

Just to make sure there is no confusion: Nobody intends to integrate GnuCOBOL [0] into gcc; but it would be important for gcobol's integration into gcc to succeed.

GnuCOBOL (formerly OpenCOBOL) is a project which translates COBOL to intermediate C (mostly consisting of calls to functions in the GnuCOBOL runtime library libcob) and then invokes the "native" / system C compiler. It is very mature and used a lot; we _suggest_ using GCC but also work with other free and nonfree compilers on free and nonfree systems.

gcobol [1][2] (I've also seen it referenced as gcc-cobol) is an actual gcc frontend, so it translates into gcc's intermediate format. As far as I know, the plans are both to provide a usable, working COBOL compiler and to reach a state suitable for integration by 2023. It possibly will use a very small but important part of libcob (at least if available) to provide support for a COBOL-native way to read/write data. When it comes to the integration phase it _could_ be considered to integrate only those parts as-is (so effectively forking libcob to glibcob), as both GCC and GnuCOBOL are FSF-copyrighted - or to add it as an optional dependency (a lot of COBOL users don't use that 'old' way of accessing data and have moved to EXEC SQL preprocessors instead).

But as GnuCOBOL maintainer, my question here was about the GnuCOBOL name mangling. I've now learned that, as there isn't an explicit prefix like _Z, the demangling would be "upon request", and as far as the responses so far go it seems like a reasonable approach and "patches to add that are likely to be accepted" (otherwise I won't start, because obviously there is always something to do on the GnuCOBOL side, too).

Simon

[0]: http://www.gnu.org/software/gnucobol
[1]: https://gcc.gnu.org/pipermail/gcc/2022-March/238408.html
[2]: https://git.symas.net/cobolworx/gcc-cobol/-/tree/master+cobol/gcc/cobol
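To give an idea of the shape such libiberty support might take, here is a deliberately minimal sketch. It only reverses one rule I recall from GnuCOBOL's name encoding ('-' in a COBOL word becoming "__" in the C symbol); the entry point name is made up, and the real rules would of course have to be taken from GnuCOBOL itself:

#include <stdlib.h>
#include <string.h>

/* Hypothetical entry point; not an existing libiberty function.  */
char *
gnucobol_demangle (const char *mangled)
{
  char *out = (char *) malloc (strlen (mangled) + 1);
  char *p = out;

  if (out == NULL)
    return NULL;

  while (*mangled)
    {
      /* Assumption: "__" in the symbol stands for '-' in the COBOL name.  */
      if (mangled[0] == '_' && mangled[1] == '_')
        {
          *p++ = '-';
          mangled += 2;
        }
      else
        *p++ = *mangled++;
    }
  *p = '\0';
  return out;   /* caller frees, as with cplus_demangle */
}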