Thank you for your advice.
> By the way, I don't understand ChangeLog entries like this:
>
> 2016-03-23 Masamichi Hosoda
> * doc/texinfo.tex (\pdfgettoks, \pdfaddtokens, \adn, \poptoks,
> \maketoks, \makelink, \pdflink, \done): New macros.
> Add XeTeX PD
On 26 March 2016 at 00:50, Karl Berry wrote:
>> I really don't understand the code in \pdfgettoks in texinfo.tex.
>
> Not sure if this is still relevant, but it's not just about page
> numbers, is it? I thought that crazy stuff was at least partly about
> handling brace characters that should show up literally in the pdf
> bookmarks. I believe Han
>> I've made a patch that improves XeTeX PDF support.
>> This patch adds PDF table of contents page number link support.
>>
>> Would you commit it?
>> Or, may I commit it?
>>
>> Additionally,
>> I'll add link support for @email, @xref, and \urefurlonlylinktrue for XeTeX.
>
> Please do, if you haven't c
On 22 March 2016 at 15:29, Masamichi HOSODA wrote:
> I've made a patch that improves XeTeX PDF support.
> This patch adds PDF table of contents page number link support.
I really don't understand the code in \pdfgettoks in texinfo.tex. Does
anyone know what it's doing? It looks like it is taking
I've made a patch that improves XeTeX PDF support.
This patch adds PDF table of contents page number link support.
Would you commit it?
Or, may I commit it?
Additionally,
I'll add link support for @email, @xref, and \urefurlonlylinktrue for XeTeX.
ChangeLog:
XeTeX PDF TOC page number link support
2016-03-XX Masamichi Hosoda
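For context, XeTeX has no \pdfdest or \pdfstartlink primitives of its
own; with the xdvipdfmx driver, PDF destinations and link annotations
are written as pdf: \special commands. The sketch below only
illustrates the kind of specials involved; it is not Hosoda's patch,
and the destination name and email address are made-up examples:

\special{pdf:dest (chap.1) [@thispage /XYZ @xpos @ypos null]}
% Elsewhere, a GoTo link annotation pointing at that destination:
\special{pdf:bann << /Type /Annot /Subtype /Link /Border [0 0 0]
  /A << /S /GoTo /D (chap.1) >> >>}Chapter 1\special{pdf:eann}
% A URI action, as @email and @uref links would need:
\special{pdf:bann << /Type /Annot /Subtype /Link /Border [0 0 0]
  /A << /S /URI /URI (mailto:bug-texinfo@gnu.org) >> >>}%
bug-texinfo@gnu.org\special{pdf:eann}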
On 20 March 2016 at 19:43, Gavin Smith wrote:
>
> but I want to understand what the point of the \edef was in the first
> place, and what the point was of changing the catcode of backslash,
> and test whether this is still necessary. Hopefully I'll get to this
> soon.
I've committed a new change that should make special Unicode
charac
> Thanks for working on this. I'd like to avoid going back to the way it
> was done before if possible because this means that all the
> definitions of the Unicode characters are run through every time a
> macro is used. The following patch seems to give good results:
>
> Index: doc/texinfo.tex
>
On 14 March 2016 at 13:41, Masamichi HOSODA wrote:
> I've noticed that the definition of \ifpassthroughchars is duplicated.
> Here is the patch to fix it.
Thank you, fixed. Apologies for the delay in responding.
> I've noticed an issue of texinfo.tex ver. 2016-03-06.18.
> It cannot compile the attached texi file.
>
[snip...]
>
> All the following engines fail.
> LuaTeX 0.89.2
> XeTeX 0.2
> XeTeX 0.5
> pdfTeX 1.40.16
>
> With texinfo.tex ver. 2016-03-05.11, they work fine.
Here is a patch that f
> I've finally finished this. It does appear a bit faster as well as
> being simpler. Where it could potentially break is chapter names with
> non-ASCII characters, and index entries with non-ASCII characters, as
> well as macros using non-ASCII characters (either in the macro
> definition or in th
On 7 February 2016 at 16:23, Gavin Smith wrote:
> In a patch that I'm still working on, I have a definition like
>
> +\def\gdefchar#1#2{%
> +  \gdef#1{%
> +    \ifpassthroughchars
> +      \string#1%
> +    \else
> +      #2%
> +    \fi
> +}}
>
> and then e.g. in \latonechardefs
>
> \def\latonechardefs{
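For illustration, the per-character definitions presumably end up
looking something like the sketch below; the chosen characters and
accent commands are assumptions, not the actual patch. Each \gdefchar
makes an already-\active character expand to itself (via \string) when
\ifpassthroughchars is true, and to the classic accent command
otherwise:

\def\latonechardefs{%
  % assumes these byte codes have already been made \active
  \gdefchar^^e0{\`a}% U+00E0 LATIN SMALL LETTER A WITH GRAVE
  \gdefchar^^e9{\'e}% U+00E9 LATIN SMALL LETTER E WITH ACUTE
  \gdefchar^^fc{\"u}% U+00FC LATIN SMALL LETTER U WITH DIAERESIS
}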
On 8 February 2016 at 15:20, Masamichi HOSODA wrote:
> I hope it's clear what I'm trying to do here: instead of redefining,
> change the value of a conditional that is used within the macro. I
> thought that something similar might be possible with XeTeX's native
> Unicode.
Thank you for your advice.
I've made a patch.
ChangeLog:
Native Unicode re
On 7 February 2016 at 15:53, Masamichi HOSODA wrote:
>> I'm not sure if this is correct: shouldn't the conditional be inside a
>> single definition, instead of two definitions (starting \gdef~ and
>> \edef~) inside the conditional?
>
> Sorry.
> It's completely incorrect.
> It cannot switch to ``pass-through''.
>
> Even if to use \gdef for ``pass-thr
>>  \def\DeclareUnicodeCharacterNative#1#2{%
>>    \catcode"#1=\active
>> -  \begingroup
>> -    \uccode`\~="#1\relax
>> -    \uppercase{\gdef~}{#2}%
>> -  \endgroup}
>> + \ifnativeunicodereplace
>> +   \begingroup
>> +     \uccode`\~="#1\relax
>> +     \uppercase{\gdef~}{#2}%
>> +   \endgroup
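For readers unfamiliar with the idiom in that diff: \uccode sets the
uppercase mapping of ~ to the target codepoint, and \uppercase then
substitutes the active character for ~ inside the \gdef. A minimal
self-contained sketch (the codepoint and replacement text are arbitrary
examples, not taken from the patch):

\catcode"00FC=\active      % make U+00FC an active character
\begingroup
  \uccode`\~="00FC\relax   % uppercasing ~ now yields the U+00FC token
  \uppercase{\gdef~}{\"u}% % defines active U+00FC to expand to \"u
\endgroup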
On 7 February 2016 at 14:20, Masamichi HOSODA wrote:
> I have a different suggestion for fixing this issue: execute
> \unicodechardefs only once in each run, and make the expansion of each
> character use a condition. The value of the condition can be changed
> to control what the characters do without redefining all of the
> characters.
>
> The sam
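The mechanics of that suggestion, sketched with \newif (the names
follow the \ifpassthroughchars conditional quoted earlier; exactly
where texinfo.tex flips the switch is an assumption):

\newif\ifpassthroughchars % consulted by every active character
% ...
\passthroughcharstrue     % e.g. while writing auxiliary files
% ...
\passthroughcharsfalse    % e.g. while typesetting the main text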
On 31 January 2016 at 13:25, Masamichi HOSODA wrote:
>> If the empty lines are really the cause, I agree that it deserves a
>> separate commit since it doesn't seem to be related to the encoding
>> problem.
>
> The issue occurs in native Unicode only.
>
> If native Unicode is enabled,
> \nativeuni
This patch fixes the ``reference has extra space in native Unicode'' issue.
ChangeLog:
Remove extra space in references for native Unicode
2016-02-XX Masamichi Hosoda
* doc/texinfo.tex (\unicodechardefs):
Remove extra space in references for native Unicode.
--- texinfo.tex.org 2016-02-
I've fixed two issues for my native Unicode patch.
Attached to this mail is one of the two patches.
This patch fixes the ``dotless i'' issue.
ChangeLog:
Add native Unicode support for XeTeX and LuaTeX
2016-02-XX Masamichi Hosoda
* doc/texinfo.tex:
Add native Unicode support for XeTe
>> Have you ever got the CJK characters to work in a Texinfo file with
>> XeTeX or LuaTeX? If so, maybe we should conditionally load the fonts
>> that you got to work. Can you satisfactorily typeset Japanese text
>> with XeTeX without the use of LaTeX packages? If not, it very likely
>> won't be pr
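For what it's worth, loading a system font in plain XeTeX uses the
extended \font syntax below; the font name is merely an example of one
with CJK coverage, not a choice made anywhere in this thread:

\font\jafont="Noto Sans CJK JP" at 10pt
{\jafont 日本語のテキスト}% a short run of Japanese text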
>>> I noticed a page breaking issue in my patch.
>>> I've fixed it.
>
> Please provide a sample to reproduce the issue.
I've attached it.
On 28 January 2016 at 13:39, Masamichi HOSODA wrote:
> I noticed a page breaking issue in my patch.
> I've fixed it.
The empty lines in \utfeightchardefs? I'll commit that separately.
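The likely mechanism, which the thread doesn't spell out: a blank line
inside a macro definition becomes a \par token in the replacement text,
and a stray \par executed while material is being contributed to the
page can shift page breaks. A minimal sketch:

\def\good{\relax}
\def\bad{

}% the blank line above puts a \par token into \bad's expansion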
I noticed a page breaking issue in my patch.
I've fixed it.
--- texinfo.tex.org 2016-01-21 23:04:22.405562200 +0900
+++ texinfo.tex 2016-01-28 22:23:50.283561700 +0900
@@ -9433,43 +9433,68 @@
\global\righthyphenmin = #3\relax
}
-% Get input by bytes instead of by UTF-8 codepoints for XeTeX and
On 23 January 2016 at 03:06, Masamichi HOSODA wrote:
> In XeTeX and LuaTeX, is "@documentencoding ISO-8859-1" support required?
> If so, I'll improve the patch.
> It will use byte-wise input when "@documentencoding ISO-8859-1" is used.
>
> However, if you want ISO-8859-1,
> you can use pdfTeX instead of XeTeX/LuaTeX, or you can convert to UTF-8,
>> Thank you for your comments.
>> I've updated the patch.
>>
>> I want the following.
>> UTF-8 auxiliary file.
>> Handling Unicode filename (image files and include files).
>> Handling Unicode PDF bookmark strings.
>
> Thanks for working on this. I've had a look at the most recent patch,
>
On 18 January 2016 at 14:12, Masamichi HOSODA wrote:
>> If I understand correctly, you are changing the category codes of the
>> Unicode characters when writing out to an auxiliary file, but only for
>> those Unicode characters that are defined. This causes the Unicode
>> character to be written out as a UTF-8 sequence. For the regular
>> output, the defin
> I think it misses some percent signs, e.g.
>
> [snip...]
>
Thank you for your advice.
Here is fix
> I've improved native Unicode replacing patch.
I think it misses some percent signs, e.g.
\def\utfeightchardefs{% <- here
  \let\DeclareUnicodeCharacter\DeclareUnicodeCharacterUTFviii
  \unicodechardefs
}
Maybe they aren't necessary, but I would add them for consistency.
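What the % buys: an unmasked end-of-line inside a definition becomes a
space token. A two-line demonstration of the difference:

\def\aaa{x
}% \aaa expands to "x " (the line end contributed a space token)
\def\bbb{x%
}% \bbb expands to "x" alone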
> Thank you for your comments.
> I've updated the patch.
>
> [snip...]
>
> For this purpose, I used the method that changes catcode.
> The patch that is
On 17 January 2016 at 15:27, Masamichi HOSODA wrote:
> I have another solution.
> The sample patch is attached to this mail.
>
> Unicode fonts are not required. (default Computer Modern is used.)
> Byte wise input is *NOT* used.
> Unicode glyphs (U+00FC etc.) can be used.
>
> How about this?
If
> Instead, I would like to have the ucharclasses style file (for XeTeX)
> ported to texinfo (also part of TeXLive, BTW).
>
> https://github.com/Pomax/ucharclasses
>
> It should also be ported to luatex so that Unicode blocks
> automatically access associated fonts.
>
> But this is the future.
I know there are "virtual fonts" in the TeX world
TeX virtual fonts (that is, .vf files) are irrelevant to the current
discussion. -k
On 15 January 2016 at 23:48, Karl Berry wrote:
>> Well, they *could* be. We could choose a font with CJK support and
>> make the definitions in texinfo.tex just as we define existing
>> chars. In principle it is possible to make definitions for any and
>> all Unicode characters in texinfo.tex. -k
>
> I believe there would be complications.
On 16 January 2016 at 08:48, Masamichi HOSODA wrote:
> These packages can set Japanese fonts and alphabetic fonts independently.
>
> LuaTeX-ja
> https://osdn.jp/projects/luatex-ja/wiki/FrontPage%28en%29
>
> ZXjatype
> http://www.ctan.org/pkg/zxjatype
>
> I think it is *possible* to set the font
> include every single Unicode character?
Masamichi - Gavin means "all". The vast majority of fonts cover basic
European. That's not the issue.
Gavin - there are a few fonts that "aim to" include every character,
though none actually does. Here's a page with some basic info:
http://unix.st
> For example, if you want to use Japanese characters,
> I think that it is possible to set the Japanese font in txi-ja.tex.
To reiterate: as far as I know, it is not possible to set the font for
Japanese only in texinfo[.tex]. Thus the ja font, wherever it is
specified, would be used for
On 15 January 2016 at 18:13, Masamichi HOSODA wrote:
> the following is created in the output auxiliary table of contents file:
>
> @numchapentry{f@"ur}{1}{}{1}
>
> Without it, it would be
>
> @numchapentry{für}{1}{}{1}
>
> Do you understand now how changing the active definitions can change
> what's written to the output files?
Thank you for yo
On 15 January 2016 at 17:15, Masamichi HOSODA wrote:
>> I think it could be done by changing the active definitions of bytes
>> 128-256 when writing to an auxiliary file to read a single Unicode
>> character and write out an ASCII sequence that represents that
>> character, probably involving the
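A much-simplified, runnable sketch of that idea, using a single-byte
Latin-1 character rather than a real multi-byte UTF-8 sequence (the
file name and the choice of U+00FC are arbitrary; texinfo.tex's actual
code is more involved):

% Make byte "FC active and let it expand to the ASCII command @"u.
\catcode"FC=\active
\begingroup
  \uccode`\~="FC\relax
  \uppercase{\gdef~}{@"u}%
\endgroup
\newwrite\auxfile
\immediate\openout\auxfile=sample.aux
\immediate\write\auxfile{f^^fcr}% the aux file receives: f@"ur
\immediate\closeout\auxfile
\bye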
On 15 January 2016 at 15:19, Masamichi HOSODA wrote:
>>> (something like ``Table of Contents'' broken etc.)
>>>
>>> That can be fixed in other ways, without resorting to native UTF-8.
>>
>> I agree.
>
> In the case of LuaTeX, exactly, it can be fixed.
> In the case of XeTeX, unfortunately,
> it cannot be fixed if I understand correctly.
On 15 January 2016 at 00:11, Karl Berry wrote:
>> it means that you want to use native UTF-8 support in my humble opinion.
>
> Not necessarily. The problem isn't encodings, it's fonts. The two
> things are intimately and fundamentally tied together, and that cannot
> be escaped.
>
> By switching to native UTF-8, the support in texinfo.tex for characters
> outside the base font is lost, as far as I can see. Yes, you get some
> characters "for free" (the ones in the lmodern*.otf fonts now being
> loaded instead of the traditional cm*) but you also lose some characters
On 11 January 2016 at 16:22, Masamichi HOSODA wrote:
> I've created a patch that uses the native Unicode support of both XeTeX
> and LuaTeX.
> It works fine in my XeTeX, LuaTeX and pdfTeX environment.
> Except that LuaTeX creates a broken PDF bookmark.
>
> How about this?
It looks mostly all right. We'd need to wait until we have your
copyright as
On 11 January 2016 at 03:47, Masamichi HOSODA wrote:
> Thank you.
> It works for my LuaTeX environment.
>
> On the other hand, in XeTeX,
> it seems that XeTeX does not have something like \XeTeXoutputencoding.
It appears not, from what I could find out.

For now, if you need to use XeTeX, you'd have to avoid any non-ASCII
characters in anything written to an auxiliary file, e.g. use @"u
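In Texinfo source that means spelling accented characters with
@-commands wherever their text can end up in an auxiliary file. A
minimal example, modeled on the test file quoted elsewhere in this
thread (@"u is the Texinfo command for u-umlaut):

\input texinfo.tex
@contents
@chapter f@"ur
@bye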
On 10 January 2016 at 19:21, Gavin Smith wrote:
> For LuaTeX the code should be something like
Here's the code that worked for me:

local function convert_line_out (line)
  local line_out = ""
  for c in string.utfvalues(line) do
    line_out = line_out .. string.char(c)
  end
  return line_out
end

callback.register("process_output_buffer", convert_line_out)

Appa
On 10 January 2016 at 17:59, Masamichi HOSODA wrote:
> In XeTeX and LuaTeX, the non-ASCII chapter name in the ``Table of
> contents'' is broken.
> In pdfTeX, it is not broken.
For LuaTeX the code should be something like

local function convert_line_out (line)
  local line_out = ''
  for p, c in unicode.
In XeTeX and LuaTeX, the non-ASCII chapter name in the ``Table of
contents'' is broken.
In pdfTeX, it is not broken.
Attached are a texi file and screenshots of the PDFs.
\input texinfo.tex
@documentencoding UTF-8
@contents
@chapter für
für
@bye
Here's a file that I ran with pdftex and with luatex: both worked.
If this looks right, the code can be moved into texinfo.tex.
>>
>> \ifx\XeTeXrevision\thisisundefined
>> \else
>> \XeTeXinputencoding "bytes"
>> \fi
>>
>> although I haven't been able to test this.
>
> I've tried the at