Re: [Rd] Mac R spinning wheel with Package Manager (PR#14005)

2009-10-15 Thread James
Thanks Simon,

I changed my mirror and the freeze on "Get List" was reduced (although
there was still a spinning pizza).  Attempting an install definitely
locked things up, but I could tell from my network activity that it was
downloading, and it did go ahead and install.

Thanks.  I suppose expected behavior would be something communicating
the issue and perhaps a progress bar; certainly I think avoiding a
spinning pizza would be a good thing, since that is usually very bad
news :)

Cheers,
James

On Oct 14, 2009, at 09:32, Simon Urbanek wrote:

> (moving to the proper mailing list: R-SIG-Mac - this is not a bug so  
> far!)
>
> James,
>
> this looks like your internet access is stalling R -- it's not really  
> R freezing but your internet. Try using a different mirror and/or  
> check your internet connection.
>
> Cheers,
> Simon
>
>
> On Oct 13, 2009, at 10:45 , ja...@howison.name wrote:
>
>> Full_Name: James Howison
>> Version: 2.9.2
>> OS: Mac OS X 10.5.8
>> Submission from: (NULL) (128.2.222.163)
>>
>>
>> For quite a while now I have been unable to use Package Manager to
>> install packages.  I get the spinning wheel and long lock-ups.  This
>> happens when I hit "Get List", but that eventually returns, seemingly
>> successfully.  However, it also happens when I try to install a
>> package; the whole application locks up.  I don't see anything in a
>> log file; I took a sample using Activity Monitor, and I have the
>> report from after a force quit (which I didn't "send to Apple").
>>
>> I'm able to build packages fine using install.packages and R CMD
>> INSTALL.
>>
>> This behavior continued with a fresh R install (Oct 13, 2009).
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] allocation of large matrix failing

2005-07-12 Thread James Bullard
Hello, this is probably something silly I am doing, but I cannot
understand why this allocation is not happening.

Below is my C code, which tries to allocate a list of length 333559 and
then an integer vector of length 8*333559. I thought I might be running
into memory problems, but R is not even using that much (I start R with
more memory and usage stays constant). Also, if I start R as I normally
do and allocate a matrix of that size at the R level, it returns
instantly, so I am inclined to think that this is not a memory/GC issue,
but I admit it may be.

int numHits = seq.GetNumberHits();

Rprintf("numHits:%d\n", numHits);
Rprintf("before allocation...\n");

SEXP oligos, matrix;
PROTECT(oligos = NEW_LIST(numHits));
Rprintf("allocated oligo list...\n");

PROTECT(matrix = NEW_INTEGER(numHits * 8));
Rprintf("entering loop...\n");



entering sequence loop.
numHits:333559
before allocation...
allocated oligo list...

It hangs here every time (never printing "entering loop..." -- I have
waited on the order of 10 minutes). If I remove the 8 then it completes.
Essentially I want to allocate a vector of that length and then
dimension it into a matrix, but I cannot see why this does not work.

debian 2.6
R 2.1.0

Thanks as always for any insight.

jim



[Rd] wchar and wstring.

2005-08-26 Thread James Bullard
Hello all, I am writing an R interface to some C++ files which make use
of std::wstring classes for internationalization. Previously (when I
wanted to make R strings from C++ std::strings), I would do something
like this to construct a string in R from the results of the parse.

SET_VECTOR_ELT(vals, i++, mkString(header.GetHeader().c_str()));


However, now the call header.GetHeader().c_str() returns a pointer to an
array of wchar_t's. I was going to use
wcstombs() to convert the wchar_t* to char*, but I wanted to see if
there was a similar function in R for the mkString function which I had
initially used which deals with wchar_ts as opposed to chars. Also,
since I have no experience with the wctombs() function I wanted to ask
if anyone knew if this will handle the internationalization issues from
within R.


As always thank you for the help. jim



[Rd] Question about SET_LENGTH

2005-08-29 Thread James Bullard
Hello all, thanks for all the help on the other issues. This one should
be relatively straightforward. I have a vector of integers which I
allocate to the maximal possible size (meaning I'll never see more than
cel.GetNumOutliers(), but most likely I'll see fewer); therefore I want
to resize the vector. One solution would be to allocate a new vector and
copy into it; that is what I was going to do until I saw the SET_LENGTH
macro. Does this macro effectively take care of the memory? Is this an
acceptable use of the macro? The code works, but I don't want any
lurking memory problems.


PROTECT(outliers = NEW_INTEGER(cel.GetNumOutliers()));

if (i_readOutliers != 0) {
    if (noutlier == 0) {
        outliers = R_NilValue;
    }
    else if (noutlier < cel.GetNumOutliers()) {
        SET_LENGTH(outliers, noutlier);
    }
}


Thanks as always!

jim



Re: [Rd] wchar and wstring. (followup question)

2005-08-29 Thread James Bullard
Thanks for all of the help with my previous posts. This question might
expose much of my ignorance when it comes to R's memory management and
the responsibilities of the programmer, but I thought I had better ask
rather than continue in my ignorance.

I use the following code to create a multi-byte string in R from my wide
character string in C.

int str_length;
char *cstr;

str_length = cel.GetAlg().size();
cstr = Calloc(str_length + 1, char);  /* +1 leaves room for the terminator */
wcstombs(cstr, cel.GetAlg().c_str(), str_length + 1);
cstr[str_length] = '\0';              /* wcstombs need not NUL-terminate;
                                         assumes single-byte output chars */
SET_STRING_ELT(names, i, mkChar("Algorithm"));
SET_VECTOR_ELT(vals, i++, mkString(cstr));
Free(cstr);

My first question is: do I need the Free? I looked at some of the
examples in main/character.c, but I could not decide whether or not I
needed it. I imagined (I could not find the source for this function)
that mkString made a copy so I thought I would clean up my copy, but if
this is not the case then I would assume the Free would be wrong.

My second question is: It was pointed out to me that it would be more
natural to use this code:

SET_STRING_ELT(vals, i++, mkChar(header.GetHeader().c_str()));

instead of:

SET_VECTOR_ELT(vals, i++, mkString(header.GetHeader().c_str()));

However, the first line creates the following list element in R:



Whereas, I want it to create as the list element:

"Percentile"

Which the second example does correctly. I had previously posted about
this problem and I believe I was advised to use the second syntax, but
maybe there is a different problem in my code. I am trying to construct
a named list in R where the first line (SET_STRING_ELT) sets the name of
the list element and the second sets the value, where the value can be
an INTEGER, STRING, or whatever.

My third question is simply: why is wcrtomb preferred? The example I
based my code on in main/character.c used wcstombs.


Thanks again for all of the help.

jim


Prof Brian Ripley wrote:

> On Fri, 26 Aug 2005, James Bullard wrote:
>
>> Hello all, I am writing an R interface to some C++ files which make use
>> of std::wstring classes for internationalization. Previously (when I
>> wanted to make R strings from C++ std::strings), I would do something
>> like this to construct a string in R from the results of the parse.
>>
>> SET_VECTOR_ELT(vals, i++, mkString(header.GetHeader().c_str()));
>
>
> That creates a list of one-element character vectors.  It would be more
> usual to do
>
>   SET_STRING_ELT(vals, i++, mkChar(header.GetHeader().c_str()));
>
>> However, now the call header.GetHeader().c_str() returns a pointer to
>> an array of wchar_t's. I was going to use wcstombs() to convert the
>> wchar_t* to char*, but I wanted to see if there was a similar
>> function in R for the mkString function which I had initially used
>> which deals with wchar_ts as opposed to chars.
>
>
> No (nor an analogue of mkChar).  R uses MBCS and not wchar_t
> internally (and Unix-alike systems do externally).  There is no
> wchar_t internal R type (a much-debated design decision at the time).
>
>> Also, since I have no experience with the wctombs() function I wanted
>> to ask if anyone knew if this will handle the internationalization
>> issues from within R.
>
>
> Did you mean wcstombs or wctomb (if the latter, wcrtomb is preferred)?
> There are tens of examples in the R sources for you to consult.
>
> Note that not all R platforms support wchar_t, hence this code is
> surrounded by #ifdef SUPPORT_MBCS macros (exported in Rconfig.h for
> package writers).
>



Re: [Rd] [R] Debugging R/Fortran in Windows

2005-09-09 Thread James Wettenhall
Thanks very much to Uwe, Duncan and Seth (who replied off the list).

Uwe - That section of the R for Windows FAQ was very useful - thanks! 
Sorry I posted a question involving C/Fortran to R-Help.

Duncan - Thanks for all the useful info.  I've bookmarked the pages you
sent me.

Seth - Thanks for suggesting valgrind.  I tried it out, and it correctly
told me that memory leakage was not the problem (although I didn't believe
it at first).

It turned out that the reason my variables were being overwritten was not
because of memory leakage, but because of my own stupidity - using the
same variable name for a function I was estimating and for my current
estimate of that function.  Sorry I didn't spend more time checking this
myself!

Thanks again for your help,
James



Re: [Rd] [R] Debugging R/Fortran in Windows

2005-09-10 Thread James Wettenhall
Duncan,

>> Seth - Thanks for suggesting valgrind.  I tried it out, and it correctly
>> told me that memory leakage was not the problem (although I didn't
>> believe it at first).
>
> Is there a version of valgrind that works in Windows now, or did you do
> this test somewhere else?
>
> Duncan Murdoch

No, I didn't find a version of valgrind that works on Windows.  I used it
on Linux.

Best wishes,
James



Re: [Rd] vector labels are not permuted properly in a call to sort() (R 2.1)

2005-10-05 Thread David James
Martin Maechler wrote:
> > "AndyL" == Liaw, Andy <[EMAIL PROTECTED]>
> > on Tue, 4 Oct 2005 13:51:11 -0400 writes:
> 
> AndyL> The `problem' is that sort() does not do anything special when
> AndyL> given a matrix: it simply treats it as a vector.  After sorting,
> AndyL> it copies attributes of the original input to the output.  Since
> AndyL> dimnames are attributes, they get copied as is.
> 
> exactly. Thanks Andy.
> 
> And I think users would want this (copying of attributes) in
> many cases; in particular for user-created attributes
> 
> ?sort  really talks about sorting of vectors and factors;
>and it doesn't mention attributes explicitly at all
>{which should probably be improved}.
> 
> One could wonder if R should keep the dim & dimnames
> attributes for arrays and matrices.  
> S-plus (6.2) simply drops them {returning a bare unnamed vector}
> and that seems pretty reasonable to me.

This is as described in the Blue book, p.146, "Throwing Away Attributes".

> 
> At least the user would never make the wrong assumptions that
> Greg made about ``matrix sorting''.
> 
> 
> AndyL> Try:
> 
> >> y <- matrix(8:1, 4, 2, dimnames=list(LETTERS[1:4], NULL))
> >> y
> AndyL>   [,1] [,2]
> AndyL> A    8    4
> AndyL> B    7    3
> AndyL> C    6    2
> AndyL> D    5    1
> >> sort(y)
> AndyL>   [,1] [,2]
> AndyL> A    1    5
> AndyL> B    2    6
> AndyL> C    3    7
> AndyL> D    4    8
> 
> AndyL> Notice the row names stay the same.  I'd argue that this is the 
> correct
> AndyL> behavior.
> 
> AndyL> Andy
> 
> 
> >> From: Greg Finak
> >> 
> >> Not sure if this is the correct forum for this, 
> 
> yes, R-devel is the proper forum.
> {also since this is really a proposal for a change in R ...}
> 
> >> but I've found what I  
> >> would consider to be a potentially serious bug to the 
> >> unsuspecting user.
> >> Given a numeric vector V with class labels in R,  the following calls
> >> 
> >> 1.
> >> > sort(as.matrix(V))
> >> 
> >> and
> >> 
> >> 2.
> >> >as.matrix(sort(V))
> >> 
> >> produce different output. The vector is sorted properly in 
> >> both cases,  
> >> but only 2. produces the correct labeling of the vector. The call to  
> >> 1. produces a vector with incorrect labels (not sorted).
> >> 
> >> Code:
> >> >X<-c("A","B","C","D","E","F","G","H")
> >> >Y<-rev(1:8)
> >> >names(Y)<-X
> >> > Y
> >> A B C D E F G H
> >> 8 7 6 5 4 3 2 1
> >> > sort(as.matrix(Y))
> >>   [,1]
> >> A    1
> >> B    2
> >> C    3
> >> D    4
> >> E    5
> >> F    6
> >> G    7
> >> H    8
> >> > as.matrix(sort(Y))
> >>   [,1]
> >> H    1
> >> G    2
> >> F    3
> >> E    4
> >> D    5
> >> C    6
> >> B    7
> >> A    8
> >>
> 


--
David



Re: [Rd] [R-gui] R GUI considerations (was: R, Wine, and multi-threadedness)

2005-10-15 Thread James Wettenhall
Hi Philippe and everyone else,

As you know, I have certainly spent some time thinking about R-GUIs, and
developing some R-Tcl/Tk GUIs - limmaGUI and affylmGUI (available from
Bioconductor).  I have also spent some time wishing we could use a GUI
toolkit with a more modern look and feel.  Hence I have investigated
wxWidgets, and thought that using wxPython might be the easiest way to
interface it from R, but I ran into some technical problems (especially
because I was obsessed with trying to make the interface from R to wx seem
not-too-object-oriented, i.e. I like the idea of 1. create a dialog, 2.
add a button, etc., as in Tcl/Tk, rather than defining a dialog class, then
defining an object which is an instance of that class and then invoking a
show method on that object etc.)

I can't really afford to make R-GUIs much of a priority in my work any
more (so I may not read these philosophical discussions about which GUI
toolkit is best as closely as I used to), but I'm happy to answer any
specific questions about my experience with R-GUI development.  I hope
that doesn't sound too presumptuous of me, but I think that John Fox's
R-Tcl/Tk GUI package (Rcmdr) and mine (limmaGUI and affylmGUI) are some of
the most sophisticated (and most popular in terms of users) attempts to
build a platform-independent GUI interface to a command-line R package (or
packages).  And then there are other really nice GUIs for R which are not
necessarily platform independent - like some of Philippe's SciViews
software, and I recently came across a really nice GUI using GraphApp
(Windows only) for connecting to ODBC - I think it was in Brian Ripley's
RODBC package.

One point I wanted to make here is that there are some people in the R
community who have really great experience to offer from trying to develop
better R GUIs, but who don't necessarily participate on the R-SIG-GUI
mailing list.  For example, I was talking to Jan de Leeuw on the R-SIG-MAC
mailing list and he mentioned that he has done some great work trying to
interface R-wxPython, but that it was difficult maintaining the glue
between the different components.  And there are people in Bioconductor
(e.g. Jianhua Zhang - widgetTools & tkWidgets,  Jeff Gentry -
widgetInvoke) and there are people who have been involved in OmegaHat e.g.
Duncan Temple Lang who all have great experience to offer.

But some of the 'philosophical' ideas that people would like to implement
e.g. interfacing R and wxWidgets 'directly' without going through Python
(e.g. using Cable Swig) (or interfacing R and Tk via C rather than via
Tcl) seem like a massive task, and no-one seems sufficiently motivated to
provide money to employ someone (or multiple people) to do something as
big as that.

Just my thoughts.  Feel free to ignore.
Regards,
James



Re: [Rd] [R-gui] R GUI considerations (was: R, Wine, and multi-threadedness)

2005-10-17 Thread James Wettenhall
Hi,

One point I forgot to make last time is that I'm a big fan of prototyping.
 I have almost no experience whatsoever in Java Swing, but there are
plenty of people in the R community who do - e.g. Simon Urbanek and the
JGR team.  In the past, I have had trouble finding any elegant prototypes
(e.g. "Hello, World" applications) interfacing R with Java Swing.

Similarly, I'd love to see "Hello, World" prototypes for R-Qt etc. Of
course for many of those GUI systems originally developed for Linux or
Unix, there is the question of how reliably you can port them to the most
popular operating system in the world and to MacOS X.  I love using GIMP
under Windows, but how many Windows software developers would say that
compiling GTK on Windows (e.g. within an R package) is easy?  That's why I
liked wxPython/wxWidgets because it didn't attempt the difficult task of
porting GTK to Windows - it just uses native Windows widgets on Windows
and GTK widgets on Linux.

I don't want to emphasize my interest in prototypes too strongly though -
I still think that there is a lot of work to be done beyond any prototype.
 Maybe the successful publication of Rcmdr in the Journal of Statistical
Software will give me hope that the academic community will eventually
feel more comfortable asking for money to employ people to do some
challenging GUI-related software development which may not immediately
promise publications within the core research aims of a particular
department/institute.   Or maybe [horrible thought], some business outside
the academic community will provide a very expensive but very good GUI
development system for R.

Best regards,
James



Re: [Rd] [R-gui] R GUI considerations (was: R, Wine, and multi-threadedness)

2005-10-19 Thread James Wettenhall
Hi Peter and everyone,

[Hmmm, didn't I say I was not really interested in spending time getting
into these discussions anymore?  Oh well, I can't help myself. ;-) ]

> Why would you want a GUI for something like R in the first
> place? It is a programming language. That is its force. Nothing
> beats the command line.

I think there are many people who would strongly disagree with your
suggestion that there is no point in developing GUIs for R.  But there is
also some ambiguity about what is meant by a GUI - an Integrated
Development Environment (IDE) for developers, or a GUI for users who are
highly intelligent, but have no interest whatsoever in ever learning how
to use a command-line interface, whilst still wanting to access some of
the functionality in R/Bioconductor.

Some statisticians / numerical computing specialists work in isolation and
like to advertise that some of their work is very "applied" e.g. they are
working on a project which will save the world or cure cancer or whatever
[sorry for the exaggeration] but this is a natural way for them to market
the importance of their field of research and feel good about themselves.

On the other hand, there are people like the bioinformatics group I work in
who are a very successful research group, partly because we don't work in
isolation.  Instead we collaborate very closely with scientists from other
fields (i.e. biomedical scientists), but there is an extreme danger here
of being used as a service group (like I.T. support) by the biomedical
scientists who don't appreciate how much work is involved in computer
programming, statistics etc.  So one solution is to use a language like R,
with the philosophy "users become developers", i.e. rather than having to
learn an intimidating hierarchy of 100's of classes in some object
oriented language [OK, I'm exaggerating here], the user can begin using R
quite gently (but still do some powerful statistical calculations) and
then gradually become more advanced.  Now some of the (extremely
intelligent) biologists we collaborate with are very fearful of getting
started in a command-line interface, so they keep asking us to do mundane
things for them which are not going to lead to any research publications
for us - i.e. we feel like we are just being used as I.T. support.  So by
providing a GUI to them, getting started in R is less intimidating for
them, so then we can hopefully spend less time doing mundane numerical
computing tasks for our collaborators and have more time to do our own
serious research.  And we can even publish our work on developing GUIs
which we have - just a short article in Bioinformatics OUP so far - and
John Fox has published a full-length article on Rcmdr in the Journal of
Statistical Software - great stuff!

Does that make sense?
James



[Rd] calling fortran from C

2005-10-20 Thread James Bullard

Hello, I had a question about calling some of R's Fortran routines from C.
Specifically, I would like to call dqrfit from some C code which will be
bundled as an R package. I was hoping someone knew of an example in R's
sources which does something like this (any Fortran call from R's C code
would probably be a sufficient guide). So far I can only find locations
where R calls the Fortran directly, but this is not an option for me.

Also, I am trying to gauge the overhead of making this call: does anyone
know whether there might be some non-trivial constant-time penalty on
such a call?

Thanks in advance, Jim



Re: [Rd] [R-gui] R GUI considerations (was: R, Wine, and multi-threadedness)

2005-10-20 Thread James Wettenhall
...which the GUI currently cannot do, then the
statistician can do some command-line operations on their .RData file and
send them back some graphs to put into their PowerPoint slides or an
updated .RData file they can continue using in the GUI or whatever.  I've
certainly done this for some users and they have been very pleased with
the results.  For other users, I have actually added new buttons / menu
items to the GUI very quickly in response to their requests for features
not currently available.

> You want all the preprocessing done in a GUI? I don't see how
> that is possible in a way that makes sense. How do you tell the
> GUI what your raw data looks like? How do you tell it to prepare
> the data for processing by R? Does the user have to learn the
> GUI's own scripting language and filters?

No, the user is not expected to learn any scripting language.  I think it
would be very difficult to write a general GUI for all types of data
analysis in R.  But we are focusing on very specific classes of data, e.g.
gene expression microarray data, where most people are using some very
well-known, well-documented raw data formats.  Gradually the developer can
add support for more raw data formats to the GUI, but sometimes the
developer receives a request for supporting a data format which is so
obscure that it is not worth the GUI developer's time to implement it so
we can just say "Sorry, you'll have to collaborate with a statistician who
can do your analysis at the command-line, unless you want to learn the
command-line interface yourself."

> If you want users to be productive, you have to give them
> something they can easily incorporate within the tools they use
> on a daily basis.

I have no objection to software developers writing Visual Basic macros for
Excel or webservers with familiar-looking HTML forms if that's what they
want.  But in our case, we want them to use something closely related to
the tools that WE use on a daily basis (not just what THEY use), because
if they develop a need for a customization - something out of the
ordinary, it is going to be much easier for us to fulfill their special
request using a command-line analysis if they are using R (either directly
at the command-line interface or via a GUI), than if they are using an
expensive proprietary commercial system.  And just to clarify, I'm not
talking about things like Microsoft Office which they would have anyway,
I'm talking about scientists deciding whether to spend thousands of
dollars on commercial software for very specific types of data analysis
starting with well-documented raw data formats or whether to use our free
software instead.

> No big applications with everything locked in,
> but a set of programs or commands that do specific tasks, with
> an easy to understand input and output. You need something that
> works in an open environment, so the user can use existing
> skills. With a GUI that does "everything", the user has to learn
> from scratch all the things that make "everything" happen.

I certainly don't think talking about a GUI for R that does "everything"
is realistic, and I have never considered doing that.  I'm happy to admit
that my GUIs have limitations in terms of what the users can do by
point-and-click, but for the advanced user (or for a collaborating
statistician who can take .RData files saved from their GUI analysis),
there are ways of switching from GUI analysis to command-line analysis
when really necessary, so in a way, an R-GUI _can_ do "everything", but it
just can't do everything in a user-friendly way, and doesn't want to try
to do that, because that would mean a completely intimidating collection
of menus and buttons (which is how some people feel about Microsoft
Office).

We may have to agree to disagree about some things, but I hope this has
made my point of view a little clearer.

Best wishes,
James



[Rd] native logistic regression

2005-10-25 Thread James Bullard
Hello, thanks for the answers to my last questions, and apologies for not
seeing the answers staring me down in the manual. At the risk of asking
another potentially obvious question ... we are currently using some of
the NAG routines for model fitting, and I am trying to rewrite the
relevant portions of the code to use R's built-in utilities (dqrls).
The next step is to implement logistic regression (iteratively
reweighted least squares with a logistic link function). glm does the
iteration in R, and it is just not acceptable to call R's routine from
C; my first inclination was to rewrite glm's loop in C, but I thought I
would ask first: 1) is there a standard C or Fortran routine in R for
doing iteratively reweighted least squares with the logistic link
function, and 2) if there is no such routine in R's core libraries, does
anyone know of R packages which have implemented this natively?

Thanks again for all of the help, jim



[Rd] segfault following a detach

2005-12-09 Thread James Bullard
Hello, first off, thanks for all of the previous help; hopefully someone
will have some insight on this question. I am attempting to track down a
segmentation fault which occurs only after a detach(2) is called in the
code (I have replaced the detach(2) with detach(package:DSA) and that
fails as well; furthermore, if I remove the detach calls it does not
segfault). It has proved difficult to track down (at least for me)
because the fault does not happen when the call is made: detach returns,
and then some seconds (~30 seconds to 1 minute) later a segmentation
fault occurs. I have run it in the debugger and the backtrace is below.
When I step through the code of do_detach, the fault does not appear to
happen at any consistent location. I assume this means that some worker
thread is involved, but the backtrace is not helpful (at least to me).

1) Can I get a more informative backtrace after the segfault?
2) Can I set breakpoints elsewhere which might be more instructive, as I
do not see much going on in do_detach? Suggestions?

The library I am working with is in C and uses NAG; it uses the
registration facilities, although I have the problem even when I do not
use them. Specifically, I have defined the method
void R_init_DSA(DllInfo *info); however, as I said, if I comment this
out the behavior appears identical.

Also, I have run the whole test case under valgrind to see if I could
track down the problem there (I assume I am trashing some of R's
memory); however, the only messages I get from valgrind are below -- all
related to the registration code. It does not appear to segfault when I
run it under valgrind, but I have no idea why that would be the case, as
I am *very* new to valgrind.

I am a little out of my league here so any help would be greatly 
appreciated. OS and R version information is below. Thanks as always for 
all of the help.

thanks, jim

> R.version

platform i686-pc-linux-gnu
arch     i686
os       linux-gnu
system   i686, linux-gnu
status
major    2
minor    2.0
year     2005
month    10
day      06
svn rev  35749
language R


(gdb) backtrace
#0  0xb71655d0 in ?? ()
#1  0x0872fc70 in ?? ()
#2  0x0872fc58 in ?? ()
#3  0xb69b7ab8 in ?? ()
#4  0xb71654d5 in ?? ()
#5  0x in ?? ()
#6  0x in ?? ()
#7  0x4399ca09 in ?? ()
#8  0x in ?? ()
#9  0x in ?? ()
#10 0x in ?? ()
#11 0x0872fc18 in ?? ()
#12 0x08ee0fe0 in ?? ()
#13 0x in ?? ()
#14 0xb69c5c30 in __JCR_LIST__ () from /lib/tls/i686/cmov/libpthread.so.0
#15 0xb69b7b4c in ?? ()
#16 0xb69bcae0 in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
#17 0xb69bcae0 in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
#18 0xb7d09c9a in clone () from /lib/tls/i686/cmov/libc.so.6

---
- valgrind output, after detach(.) is called 
-
---
==20262== Conditional jump or move depends on uninitialised value(s)
==20262==at 0x1B92D888: R_getDLLRegisteredSymbol (Rdynload.c:665)
==20262==by 0x1B92D9C5: R_dlsym (Rdynload.c:735)
==20262==by 0x1B92D0BD: R_callDLLUnload (Rdynload.c:412)
==20262==by 0x1B92D15B: DeleteDLL (Rdynload.c:439)
==20262==
==20262== Conditional jump or move depends on uninitialised value(s)
==20262==at 0x1B92D8D2: R_getDLLRegisteredSymbol (Rdynload.c:681)
==20262==by 0x1B92D9C5: R_dlsym (Rdynload.c:735)
==20262==by 0x1B92D0BD: R_callDLLUnload (Rdynload.c:412)
==20262==by 0x1B92D15B: DeleteDLL (Rdynload.c:439)
==20262==
==20262== Conditional jump or move depends on uninitialised value(s)
==20262==at 0x1B92D8D7: R_getDLLRegisteredSymbol (Rdynload.c:681)
==20262==by 0x1B92D9C5: R_dlsym (Rdynload.c:735)
==20262==by 0x1B92D0BD: R_callDLLUnload (Rdynload.c:412)
==20262==by 0x1B92D15B: DeleteDLL (Rdynload.c:439)
==20262==
==20262== Conditional jump or move depends on uninitialised value(s)
==20262==at 0x1B92D8DB: R_getDLLRegisteredSymbol (Rdynload.c:696)
==20262==by 0x1B92D9C5: R_dlsym (Rdynload.c:735)
==20262==by 0x1B92D0BD: R_callDLLUnload (Rdynload.c:412)
==20262==by 0x1B92D15B: DeleteDLL (Rdynload.c:439)
==20262==
==20262== Conditional jump or move depends on uninitialised value(s)
==20262==at 0x1B92D8E0: R_getDLLRegisteredSymbol (Rdynload.c:696)
==20262==by 0x1B92D9C5: R_dlsym (Rdynload.c:735)
==20262==by 0x1B92D0BD: R_callDLLUnload (Rdynload.c:412)
==20262==by 0x1B92D15B: DeleteDLL (Rdynload.c:439)
==20262==
==20262== Conditional jump or move depends on uninitia

[Rd] Using .onUnload() to unload compiled code

2006-02-08 Thread James MacDonald
If one wants to unload compiled code for a package containing a namespace,
my understanding is that .onUnload() should be used, with a call to
library.dynam.unload(). This is done in e.g. the stats and methods
packages, but it appears to me that the compiled code is not being
unloaded when the package is detached. Am I misunderstanding something?

Best,

Jim

> search()
[1] ".GlobalEnv""package:methods"   "package:stats" 
"package:graphics" 
[5] "package:grDevices" "package:utils" "package:datasets"  "Autoloads" 
   
[9] "package:base" 

> stats:::.onUnload
function (libpath) 
library.dynam.unload("stats", libpath)


> getLoadedDLLs()
   Filename Dynamic.Lookup
base   base  FALSE
iconvC:/rw2030dev/modules/iconv.dll   TRUE
grDevices C:/rw2030dev/library/grDevices/libs/grDevices.dll  FALSE
stats C:/rw2030dev/library/stats/libs/stats.dll  FALSE
methods   C:/rw2030dev/library/methods/libs/methods.dll  FALSE

> detach(3)

> search()
[1] ".GlobalEnv""package:methods"   "package:graphics"  
"package:grDevices"
[5] "package:utils" "package:datasets"  "Autoloads" "package:base"  
   
> getLoadedDLLs()
   Filename Dynamic.Lookup
base   base  FALSE
iconvC:/rw2030dev/modules/iconv.dll   TRUE
grDevices C:/rw2030dev/library/grDevices/libs/grDevices.dll  FALSE
stats C:/rw2030dev/library/stats/libs/stats.dll  FALSE
methods   C:/rw2030dev/library/methods/libs/methods.dll  FALSE

> R.version
   _ 
platform   i386-pc-mingw32   
arch   i386  
os mingw32   
system i386, mingw32 
status Under development (unstable)  
major  2 
minor  3.0   
year   2006  
month  01
day01
svn rev36947     
language   R 
version.string Version 2.3.0 Under development (unstable) (2006-01-01 r36947)


James W. MacDonald
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] How do you make a formal "feature" request?

2010-08-21 Thread James Bullard
I use the summary function. Being unfamiliar with the SAS report function, it 
is difficult to answer more completely.

jim

On Aug 21, 2010, at 8:41 AM, Donald Winston wrote:

> Who decides what features are in R and how they are implemented? If there is 
> someone here who has that authority I have this request: 
> 
> A report() function analogous to the plot() function that makes it easy to 
> generate a report from a table of data. This should not be in some auxiliary 
> package, but part of the core R just like plot(). As a long time SAS user I 
> cannot believe R does not have this. Please don't give me any crap about 
> Sweave, LaTex, and the "power" of R to roll your own. You don't have to "roll 
> your own" plot do you? Reports are no different. If you don't agree do not 
> bother me. If you agree then please bring this request to the appropriate 
> authorities for consideration or tell me how to do it.
> 
> Thanks.
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Speeding up matrix multiplies

2010-08-27 Thread James Cloos
I just tried out the effects of using R's NA value¹ with C arithmetic
on an amd64 linux box.

I always got a NAN result for which R's R_IsNA() would return true.

At least on this platform, NAN's propagate w/o a change in their
lower 32 bits.

If NA is supposed to propagate the way NaN is spec'ed to in IEEE754,
then on some platforms it should be OK to skip the NA/NaN checks and
let the optimised routines handle all of the matrix multiplies.

A configure test could be used to determine whether the current
platform is one where NA/NaN checks are required.

-JimC

1] given:
 union rd { double d; uint64_t l; }; union rd u;
   then:
 u.d is an R NA iff both of:
 u.l & 0x7ff0000000000000 == 0x7ff0000000000000
 u.l & 0x00000000ffffffff == 1954;
 are true.
-- 
James Cloos  OpenPGP: 1024D/ED7DAEA6

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R project testers - how to help out

2011-02-10 Thread James Goss


I just want to find out how I could help out with testing on the R  
project.  I have
many years of software development as well as QA and test of  
everything from
chips to supercomputers and really enjoy testing.  Increasingly, I am  
working in
data mining and large-scale statistical kinds of things and "R" is my  
tool of choice.


Any help is much appreciated.

James Goss
Los Angeles, CA

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 plus

2011-04-14 Thread James Perkins
Dear all,

I set up a generic S4 function, then add a method.

When I do this, I notice that when pressing tab, the arguments to this
method do not appear as they do for regular functions; instead I get an
ellipsis.

i.e.

> setClass("batchFile")
[1] "batchFile"
>
> setGeneric("Gen1", function(batchFile, ...) standardGeneric("Gen1"))
[1] "Gen1"
>
> setMethod("Gen1", signature = "batchFile", definition =
+ function(batchFile, xxx, yyy, zzz) { return(batchFile) }
+ )
[1] "Gen1"
> Gen1(
...=batchFile=


Note that xxx, yyy and zzz are not displayed when pressing <tab> after Gen1(

contrast that to

> Gen1 <- function(batchFile, xxx, yyy, zzz) { return(batchFile) }
> Gen1(
batchFile=  xxx=yyy=zzz=


Is there a way to allow <tab> to show the extra options? Or has this been
deliberately disabled in order to allow someone to set a number of different
methods with method-specific arguments on a single generic?

Many thanks,

Jim
-- 
James Perkins, PhD student
Institute of Structural and Molecular Biology
Division of Biosciences
University College London
Gower Street
London, WC1E 6BT
UK

email: jperk...@biochem.ucl.ac.uk
phone: 0207 679 2198

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] self-referential representations in S4

2011-04-19 Thread James Bullard

I'm trying to do the following:

> setClass("MyNode", representation(parent = "MyNode"))
[1] "MyNode"
Warning message:
undefined slot classes in definition of "MyNode": parent(class "MyNode")

I scanned the docs, but found nothing. The representation function has no
problem, it's the setClass function which gives the warning.

What I'm trying to understand is why have the warning - it seems to work
just fine when I instantiate the class. Can we add an argument to the
setClass to suppress the warning?

This question was asked previously, but not answered in any satisfactory way:

http://r.789695.n4.nabble.com/Linked-List-in-R-td3303021.html

thanks, jim




R version 2.12.2 Patched (2011-03-09 r54717)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8   LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] h5r_1.1

loaded via a namespace (and not attached):
[1] tools_2.12.2

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] External pointers and an apparent memory leak

2011-09-15 Thread James Bullard
I'm using external pointers and seemingly leaking memory. My determination of a 
memory leak is that the R process continually creeps up in memory as seen by 
top while the usage as reported by gc() stays flat. I have isolated the C code:

void h5R_allocate_finalizer(SEXP eptr) {
Rprintf("Calling the finalizer\n");
void* vector = R_ExternalPtrAddr(eptr);
free(vector);
R_ClearExternalPtr(eptr);
}

SEXP h5R_allocate(SEXP size) {
int i = INTEGER(size)[0];
char* vector = (char*) malloc(i*sizeof(char));
SEXP e_ptr = R_MakeExternalPtr(vector, R_NilValue, R_NilValue);
R_RegisterCFinalizerEx(e_ptr, h5R_allocate_finalizer, TRUE);
return e_ptr;
}


If I run an R program like this:

v <- replicate(10, {
  .Call("h5R_allocate", as.integer(100))
})
rm(v)
gc()

Then you can see the problem (top reports that R still has a bunch of memory, 
but R doesn't think it does). I have tried using valgrind and it says I have 
memory left on the table at the end, lest you think it is merely an artifact of top. Also, I 
have tried Free/Calloc as well and this doesn't make a difference. Finally, I 
see this in both R-2-12 (patched) and R-2-13 - I think it is more an 
understanding issue on my part.

thanks much in advance, to me it really resembles the connection.c code, but 
what am I missing?

thanks, jim


[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] External pointers and an apparent memory leak

2011-09-15 Thread James Bullard
Hi Simon, Matt

First, thank you for the help. My memory is still growing and it is clear that 
I'm removing the things I am allocating - potentially it is just Linux not 
giving back the memory until another process needs it, but it definitely 
doesn't behave that way when I allocate directly within R. To be a better 
poster:

#> system("uname -a")
#Linux mp-f020.nanofluidics.com 2.6.32-30-server #59-Ubuntu SMP Tue Mar 1 
22:46:09 UTC 2011 x86_64 #GNU/Linux

#> sessionInfo()
#R version 2.13.1 Patched (2011-09-13 r57007)
#Platform: x86_64-unknown-linux-gnu (64-bit)

Here, if you look at the successive allocations you'll see that by the end I 
have started to grow my memory and, at least w.r.t. the ps method of memory 
profiling, I'm leaking memory.

> showPS <- function() system(paste('ps -eo pid,vsz,%mem | grep', Sys.getpid()))
> gcl <- function() { lapply(1:10, gc, verbose = F)[[10]] }
> 
> showPS()
18937 147828  0.1
> m <- .Call("h5R_allocate_gig")
> rm(m)
> gcl()
 used (Mb) gc trigger (Mb) max used (Mb)
Ncells 213919 11.5 407500 21.8   213919 11.5
Vcells 168725  1.3 786432  6.0   168725  1.3
> showPS()
18937 147828  0.1
> 
> m <- sapply(1:1000, function(a) {
+   .Call("h5R_allocate_meg")
+ })
> rm(m)
> gcl()
 used (Mb) gc trigger (Mb) max used (Mb)
Ncells 213920 11.5 467875   25   213920 11.5
Vcells 168725  1.3 786432  6.0   168725  1.3
> showPS()
18937 147828  0.1
> 
> m <- sapply(1:10, function(a) {
+   .Call("h5R_allocate_k")
+ })
> rm(m)
> gcl()
 used (Mb) gc trigger (Mb) max used (Mb)
Ncells 213920 11.5 818163 43.7   213920 11.5
Vcells 168725  1.3 895968  6.9   168725  1.3
> showPS()
18937 271860  0.9
> 
> m <- sapply(1:100, function(a) {
+   .Call("h5R_allocate_k")
+ })
> rm(m)
> gcl()
 used (Mb) gc trigger (Mb) max used (Mb)
Ncells 213920 11.5 785114 42.0   213920 11.5
Vcells 168725  1.31582479 12.1   168725  1.3
> showPS()
18937 1409568  7.8

I have redone the examples to better demonstrate the issue I am seeing. Below 
is the C code:

#include <R.h>
#include <Rinternals.h>
void h5R_allocate_finalizer(SEXP eptr) {
char* vector = R_ExternalPtrAddr(eptr);
Free(vector);
R_ClearExternalPtr(eptr);
}
SEXP h5R_allocate_meg() {
char* vector = (char*) Calloc(1048576, char);
for (int j = 0; j < 1048576; j++) {
vector[j] = 'c';
}
SEXP e_ptr = R_MakeExternalPtr(vector, R_NilValue, R_NilValue); 
PROTECT(e_ptr);
R_RegisterCFinalizerEx(e_ptr, h5R_allocate_finalizer, TRUE);
UNPROTECT(1);
return e_ptr;
}
SEXP h5R_allocate_k() {
char* vector = (char*) Calloc(1024, char);
for (int j = 0; j < 1024; j++) {
vector[j] = 'c';
}
SEXP e_ptr = R_MakeExternalPtr(vector, R_NilValue, R_NilValue); 
PROTECT(e_ptr);
R_RegisterCFinalizerEx(e_ptr, h5R_allocate_finalizer, TRUE);
UNPROTECT(1);
return e_ptr;
}
SEXP h5R_allocate_gig() {
char* vector = (char*) Calloc(1073741824, char);
for (int j = 0; j < 1073741824; j++) {
vector[j] = 'c';
}
SEXP e_ptr = R_MakeExternalPtr(vector, R_NilValue, R_NilValue); 
PROTECT(e_ptr);
R_RegisterCFinalizerEx(e_ptr, h5R_allocate_finalizer, TRUE);
UNPROTECT(1);
return e_ptr;
}


Finally, when I use valgrind on the test script, I see:

==22098== 135,792 bytes in 69 blocks are possibly lost in loss record 1,832 of 
1,858
==22098==at 0x4C274A8: malloc (vg_replace_malloc.c:236)
==22098==by 0x4F5D799: GetNewPage (memory.c:786)
==22098==by 0x4F5EE6F: Rf_allocVector (memory.c:2330)
==22098==by 0x4F6007F: R_MakeWeakRefC (memory.c:1198)
==22098==by 0xE01BACF: h5R_allocate_k (h5_debug.c:33)
==22098==by 0x4EE17E4: do_dotcall (dotcode.c:837)
==22098==by 0x4F18D02: Rf_eval (eval.c:508)
==22098==by 0x4F1A7FD: do_begin (eval.c:1420)
==22098==by 0x4F18B1A: Rf_eval (eval.c:482)
==22098==by 0x4F1B7FC: Rf_applyClosure (eval.c:838)
==22098==by 0x4F189F7: Rf_eval (eval.c:526)
==22098==by 0x4E6F3D8: do_lapply (apply.c:72)

Thanks for any help!

jim

From: Simon Urbanek [simon.urba...@r-project.org]
Sent: Thursday, September 15, 2011 8:35 AM
To: James Bullard
Cc: r-devel@r-project.org
Subject: Re: [Rd] External pointers and an apparent memory leak

Jim,

On Sep 14, 2011, at 5:21 PM, James Bullard wrote:

> I'm using external pointers and seemingly leaking memory. My determination of 
> a memory leak is that the R process continually creeps up in memory as seen 
> by top while the usage as reported by gc() stays flat. I have isolated the C 
> code:
>
> void h5R_allocate_finalizer(SEXP eptr) {
>Rprintf("Calling the finalizer\n");
>void* vector = R_ExternalPtrAddr(eptr);
>free(vect

[Rd] bug in nlme::getVarCov

2016-08-11 Thread James Pustejovsky
Greetings,

I noticed a bug in the getVarCov function from nlme. I am posting here
because it is not currently possible to register and file a report through
https://bugs.r-project.org/. (If this is not the appropriate venue for
this, I'd be grateful if someone could point me to the right place.)

The issue can be seen by observing that getVarCov is sensitive to the order
in which the data are sorted, as demonstrated in the following:

library(nlme)
data(Ovary)

gls_raw <- gls(follicles ~ sin(2*pi*Time) + cos(2*pi*Time), data = Ovary,
   correlation = corAR1(form = ~ 1 | Mare),
   weights = varPower())
Mares <- levels(gls_raw$groups)
V_raw <- lapply(Mares, function(g) getVarCov(gls_raw, individual = g))

Ovary_sorted <- Ovary[with(Ovary, order(Mare, Time)),]
gls_sorted <- update(gls_raw, data = Ovary_sorted)
V_sorted <- lapply(Mares, function(g) getVarCov(gls_sorted, individual = g))
all.equal(gls_raw$modelStruct, gls_sorted$modelStruct)
all.equal(V_raw, V_sorted)

See here for more details and a simple patch:
http://jepusto.github.io//Bug-in-nlme-getVarCov
Or here for just the R code:
https://gist.github.com/jepusto/5477dbe3efa992a3d42c2073ccb12ce4

James

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] CC on Bug 16932?

2016-08-22 Thread James Hiebert

Hi,

I'm being affected by Bug 16932 
(https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=16932), and would 
like to comment on it and/or be CC'ed on it.


Unfortunately I can't create a Bugzilla account... when I try, it tells 
me "The e-mail address you entered (hieb...@uvic.ca) didn't pass our 
syntax checking for a legal email address. New accounts are disabled. 
Please post bug reports to R-devel@r-project.org; if they are 
reasonable, we will whitelist you. It also must not contain any illegal 
characters."


Could I please be whitelisted? Many thanks.

~James Hiebert

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] allocation error and high CPU usage from kworker and migration: memory fragmentation?

2014-03-15 Thread James Sams

Hi,

I'm new to this list (and R), but my impression is that this question is 
more appropriate here than R-help. I hope that is right.


I'm having several issues with the performance of an R script. 
Occasionally it crashes with the well-known 'Error: cannot allocate 
vector of size X' (this past time it was 4.8 Gb). When it doesn't crash, 
CPU usage frequently drops quite low (often to 0) with high migration/X 
usage. Adding the 'last CPU used' field to top indicates that the R 
process is hopping from core to core quite frequently. Using taskset to 
set an affinity to one core results in CPU usage more typically in the 
40-60% range with no migration/X usage. But the core starts sharing time 
with a kworker task. renice'ing doesn't seem to change anything. If I 
had to guess, I would think that the kworker task is from R trying to 
re-arrange things in memory to make space for my large objects.


2 machines:
  - 128 and 256 GiB RAM,
  - dual processor Xeons (16 cores + hyperthreading, 32 total 'cores'),
  - Ubuntu 13.10 and 13.04 (both 64 bit),
  - R 3.0.2,
  - data.table 1.8.11 (svn r1129).*

Data: We have main fact tables stored in about 1000 R data files that 
range up to 3 GiB in size on disk; so up to like 50 GiB in RAM.


Questions:
  - Why is R skipping around cores so much? I've never seen that happen 
before with other R scripts or with other statistical software. Is it 
something I'm doing?
  - When I set the affinity of R to one core, why is there so much 
kworker activity? It seems obvious that it is the R script generating 
this kworker activity on the same core. I'm guessing this is R trying to 
recover from memory fragmentation?
  - I suspect a lot of my problem is from the merges. If I did that in 
one line, would this help at all?
move <- merge(merge(move, upc, by=c('upc')), parent, by=c('store', 
'year'))

  - other strategies to improve merge performance?
  - If this is a memory fragmentation issue, is there a way to get 
lapply to allocate not just pointers to the data.tables that will be 
allocated, but to (over)allocate the data.tables themselves. The final 
list should be about 1000 data.tables long with each data.table no 
larger than 6000x4.


I've used data.table in a similar strategy to build lists like this 
before without issue from the same data. I'm not sure what is different 
about this code compared to my other code. Perhaps the merging?


The gist of the R code is pretty basic (modified for simplicity). The 
action is all happening in the reduction_function and lapply. I keep 
reassigning to move to try to indicate to R that it can gc the previous 
object referenced by move.


library(data.table)
library(lubridate)
# imports several data.tables, total 730 MiB
load(UPC) # provides PL_flag data.table
load(STORES) # and parent data.table
timevar = 'month'
by=c('retailer', 'month')
save.dir='/tmp/R_cache'
each.parent <- rbindlist(lapply(sort(list.files(MOVEMENT, full.names=T)),
reduction_function, upc=PL_flag,
parent=parent, timevar=timevar, by=by))

reduction_function <- function(filename, upc, parent, timevar, by, 
save.dir=NA) {
load(filename) # imports move a potentially large data.table 
(memory size 10 MiB-50 GiB)

move[, c(timevar, 'year') := list(floor_date(week_end, unit=timevar),
  year(week_end))]
move <- merge(move, upc, by=c('upc')) # adds is_PL column, a boolean
move <- merge(move, parent, by=c('store', 'year')) # adds parent 
column, an integer

setkeyv(move, by)
# this reduces move to a data.table with at most 6000 rows, but 
always 4 columns
move <- move[, list(revenue=sum(price*units), 
revenue_PL=sum(price*units*is_PL)),

   keyby=by]
move[, category := gsub(search, replace, filename)]
return(move)
}

--
James Sams
sams.ja...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Question about fifo behavior on Linux between versions 3.0.3 and 3.1.0

2014-05-20 Thread James Smith
Version 3.1.0 of R has imposed a very small data limit on writing to fifos on 
Linux. Consider the following R code (Assumes that "ff" is a fifo in the R 
process's current directory):

con <- fifo("ff", "a+b")
writeBin(raw(12501), con)

In R 3.0.3, this returns without error and the data is available on the fifo. 
In R 3.1.0, however, this returns the following error:

Error in writeBin(raw(12501), con) : too large a block specified

In investigating R's source, the difference seems to be in 
src/main/connections.c, in the function fifo_write() (around line 932). In R 
3.0.3, fifo_write() has these lines:

if ((double) size * (double) nitems > SSIZE_MAX)
error(_("too large a block specified"));

R 3.1.0 has these lines changed to this:

if ((size * sizeof(wchar_t) * nitems) > 50000) {
  error(_("too large a block specified"));
}

The change effectively places a limit of 12500 bytes on writes (since 
sizeof(wchar_t) == 4). Does anyone know why this change was made? I understand 
that fifos on Windows were implemented for R 3.1.0, but the code for fifos on 
Windows is in a separate part of connections.c that doesn't get compiled on 
Linux (i.e., the code given is Unix only). I also couldn't find any references 
to fifo behavior changes under Linux in any of R's documentation.

My platform is Fedora 20 (64-bit) and I have built and installed R from source.

Thank you for your time and consideration.

James O Smith
Harmonia Holdings Group, LLC

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Development version of R: Improved nchar(), nzchar() but changed API

2015-04-25 Thread James Cloos
>>>>> "GC" == Gábor Csárdi  writes:

GC> You can get an RSS/Atom feed, however, if that's good:
GC> https://github.com/wch/r-source/commits/master.atom

That is available in gwene/gmane as:

 gwene.com.github.wch.r-source.commits.trunk

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Initializing a large data structure to be accessed strictly within a shared C library

2011-12-27 Thread James Muller
Dear R-devel members,

The question:  Is it possible to initialize and later free a large data
structure strictly within a shared C library, to be used by a function in
the C library that I'll call from R--without ever having to pass data to
and from R? This is analogous to C++ object initialization/use/destruction,
but if possible I'd like to stay in C.

The context: I'm implementing a particle swarm optimization of a
60-dimension nonlinear transform, where the transform is defined in a
half-gigabyte dataset. By carefully initializing a C struct I can trim a
large amount of work from the PSO iteration stage. This is, of course,
straight forward if I implement the whole thing in a self-contained C
program--however, I'd like R to handle the optimization routines, and my
shared library to implement the value function.

So: what do folks think?

Cheers,

James

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Initializing a large data structure to be accessed strictly within a shared C library

2011-12-27 Thread James Muller
Dear R-help members,

*(My apologies for cross-posting to both R-help and R-devel -- this
question straddles both domains...)*

The question:  Is it possible to initialize and later free a large data
structure strictly within a shared C library, to be used by a function in
the C library that I'll call from R--without ever having to pass data to
and from R? This is analogous to C++ object initialization/use/destruction,
but if possible I'd like to stay in C.

The context: I'm implementing a particle swarm optimization of a
60-dimension nonlinear transform, where the transform is defined in a
half-gigabyte dataset. By carefully initializing a C struct I can trim a
large amount of work from the PSO iteration stage. This is, of course,
straight forward if I implement the whole thing in a self-contained C
program--however, I'd like R to handle the optimization routines, and my
shared library to implement the value function.

So: what do folks think?

Cheers,

James

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Initializing a large data structure to be accessed strictly within a shared C library

2011-12-27 Thread James Muller
Thanks Duncan, you've unclogged my thinking. For anybody interested,
see below a sketch of the solution.

Cheers,

James


--START SKETCH OF SOLUTION--

#include <R.h>
#include <Rinternals.h>

typedef struct {
int nrow, ncol;
double *data;
} _myparticle_data_struct;
static _myparticle_data_struct myparticle_data;

void myparticle_init() {
// Before we begin, call this from .Call() to Calloc() memory and load the
// data into myparticle_data.data
}

void myparticle_free() {
// When we're done with the PSO, .Call() to Free() the data
}

void myparticle_eval(double *value, double *x) {
// .Call() to evaluate the value *value given vector x[]
}

--END SKETCH OF SOLUTION--

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Bordered legend icons and Text in plots

2013-02-28 Thread James Hawley
Hello,

My colleagues and I use lattice for a variety of different purposes, and we 
came across a couple issues involving legend.R and update.trellis.R:
1. When using xyplot, the shapes in the plots are able to have borders and fill 
colours, but not so in the legend. I created a short script to isolate the 
problem (see legend-icons.R and Nofill.png).
   After some tracing through the source code, it turned out that the "fill" 
argument in "key" was being dropped by the line
   pars <- pars[!sapply(pars, is.null)] # remove NULL components (lines 302 
and 322)
   As in key$fill was NULL when passed. The issue seems to be from the function 
process.key(). The output list does not return "fill" as one of its arguments, 
and was dropped whenever "key" was processed.
2. Giving multiple grobs to the 'inside' part of 'legend' only uses the first 
grob if you use update.trellis to provide them. This is caused by the way 
modifyList handles list elements with the same name (which is called by 
update.trellis), and can be worked around (see update.trellis.R).
   The issue without modification to your code can be seen in Notext.png 
(created by legend-text.R). Multiple grobs could still be given to 'inside' via 
the other methods (e.g. xyplot and the like).

I've made a few modifications to the source code and uploaded them here as well 
(legend.R and update.trellis.R).
For Issue 1, I added the line "fill = fill," to the output list of 
process.key(), and this seems to have fixed the issue (see legend.R line 216 
and Fill.png for results).
For Issue 2 there is a workaround in update.trellis.R lines 267-275 (see 
Text.png for the results).

James Hawley
Student

Ontario Institute for Cancer Research
MaRS Centre, South Tower
101 College Street, Suite 800
Toronto, Ontario, Canada M5G 0A3

Toll-free: 1-866-678-6427
Twitter: @OICR_news
www.oicr.on.ca

This message and any attachments may contain confidential and/or privileged 
information for the sole use of the intended recipient. Any review or 
distribution by anyone other than the person for whom it was originally 
intended is strictly prohibited. If you have received this message in error, 
please contact the sender and delete all copies. Opinions, conclusions or other 
information contained in this message may not be that of the organization.
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] linking hdf5, requires setting LD_LIBRARY_PATH

2010-03-02 Thread James Bullard





On Mar 2, 2010, at 17:45, Dirk Eddelbuettel  wrote:



Unless I am missing something, this has nothing to do with hdf5 per  
se. See

below.

No, you are not missing anything. Thank you for the response.  This is  
exactly what I was looking for.



Thanks again, Jim




On 2 March 2010 at 16:20, bull...@stat.berkeley.edu wrote:
| I am writing an R package to interface to hdf5 files (yes I know one
| exists, the issue will be demonstrated using that package). I tend  
to like
| to build/install things locally so that aspects of the system I am  
working
| on don't cause problems later when attempting to install  
elsewhere. To

| this end, I build and install the hdf5 libraries w/out incident:
|
| tar xzf hdf5-1.8.4.tar.bz2
| cd hdf5-1.8.4
| ./configure --prefix=/home/jbullard/scratch/hdf5_install
| make && make install
|
| Now, I make a shared object using the following (I am compiling  
hdf5.c

| directly in src of the hdf5 package):
|
| gcc -I/home/jbullard/projects/me/R-builder/lib64/R/include
| -I/home/jbullard/scratch/hdf5_install/include -I/usr/local/include  
-fpic

| -g -O2 -std=gnu99 -c hdf5.c -o hdf5.o
| gcc -shared -L/usr/local/lib64 -o hdf5.so hdf5.o
| -L/home/jbullard/scratch/hdf5_install/lib -lhdf5 -lz -lm
| -L/home/jbullard/projects/me/R-builder/lib64/R/lib -lR
|
| I then start R and get the following error:
|
| Error in dyn.load("hdf5.so") :
|   unable to load shared library '/home/jbullard/scratch/hdf5/src/ 
hdf5.so':
|   libhdf5.so.6: cannot open shared object file: No such file or  
directory

|
| The solution is to set LD_LIBRARY_PATH to
| /home/jbullard/scratch/hdf5_install/lib

Of course. This is a dynamic library, and ld.so needs to find it.

So if and when _you_ opt to depart from using standard locations,  
_you_ need

to tell ld.so where to look.

Setting LD_LIBRARY_PATH is one of several ways to do so. Others  
include


 a) the /etc/ld.so.conf file,

 b) (on newer linuxen) a file in the /etc/ld.so.conf.d/ directory

 c) encode the path at link time using the rpath argument.

Rcpp and RInside provide examples of the last approach.

Hth, Dirk

|
| Then things work just fine. However, I dislike this option -- Is  
there any
| other way which things can be orchestrated completely at package  
install
| time? I don't want to go editing any files in the R installation;  
more

| like an option to append something to R's LD_LIBRARY_PATH w/in the
| Makevars.in of the package being installed (in this case, hdf5)?
|
| Details of platform below. As always, thanks in advance.
|
| jim
|
| R version 2.11.0 Under development (unstable) (2010-02-08 r51110)
| x86_64-unknown-linux-gnu
|
| locale:
|  [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
|  [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
|  [5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8
|  [7] LC_PAPER=en_US.UTF-8   LC_NAME=C
|  [9] LC_ADDRESS=C   LC_TELEPHONE=C
| [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
|
| attached base packages:
| [1] stats graphics  grDevices utils datasets  methods   base
|
| HDF5 Version 1.8.4
| R hdf5 package: hdf5_1.6.9.tar.gz
| Linux mp-1246 2.6.31-19-generic #56-Ubuntu SMP Thu Jan 28 02:39:34  
UTC

| 2010 x86_64 GNU/Linux
|
| __
| R-devel@r-project.org mailing list
| https://stat.ethz.ch/mailman/listinfo/r-devel

--
 Registration is open for the 2nd International conference R /  
Finance 2010
 See http://www.RinFinance.com for details, and see you in Chicago  
in April!


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] application to mentor syrfr package development for Google Summer of Code 2010

2010-03-07 Thread James Salsman
Per http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010
-- and http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010:syrfr
-- I am applying to mentor the "Symbolic Regression for R" (syrfr)
package for the Google Summer of Code 2010.

I propose the following test which an applicant would have to pass in
order to qualify for the topic:

1. Describe each of the following terms as they relate to statistical
regression: categorical, periodic, modular, continuous, bimodal,
log-normal, logistic, Gompertz, and nonlinear.

2. Explain which parts of http://bit.ly/tablecurve were adopted in
SigmaPlot and which weren't.

3. Use the 'outliers' package to improve a regression fit maintaining
the correct extrapolation confidence intervals as are between those
with and without outlier exclusions in proportion to the confidence
that the outliers were reasonably excluded.  (Show your R transcript.)

4. Explain the relationship between degrees of freedom and correlated
independent variables.

Best regards,

James Salsman
jsals...@talknicer.com
http://talknicer.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] application to mentor syrfr package development for Google Summer of Code 2010

2010-03-07 Thread James Salsman
Chillu,

If I understand your concern, you want to lay the foundation for
derivatives so that you can implement the search strategies described
in Schmidt and Lipson (2010) --
http://www.springerlink.com/content/l79v2183725413w0/ -- is that
right? It is not clear to me how well this generalized approach will
work in practice, but there is no reason not to proceed in parallel to
establish a framework under which you could implement the metrics
proposed by Schmidt and Lipson in the contemplated syrfr package.

I have expanded the test I proposed with two more questions -- at
http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010:syrfr
-- specifically:

5. Critique http://sites.google.com/site/gptips4matlab/

6. Use anova to compare the goodness-of-fit of a SSfpl nls fit with a
linear model of your choice. How can you characterize the
degree-of-freedom-adjusted goodness of fit of nonlinear models?

I believe pairwise anova.nls is the optimal comparison for nonlinear
models, but there are several good choices for approximations,
including the residual standard error, which I believe can be adjusted
for degrees of freedom, as can the F statistic which TableCurve uses;
see: http://en.wikipedia.org/wiki/F-test#Regression_problems
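For concreteness, a minimal sketch of what question 6 might look like in practice, using the built-in DNase assay data (variable names are illustrative; note that anova() compares nested nls fits, so for the non-nested nls-versus-lm comparison below, AIC() is one common degree-of-freedom-aware yardstick):

```r
## Hedged sketch for question 6 using the built-in DNase data.
## anova() compares nested nls fits; for a non-nested nls-vs-lm
## comparison, AIC() penalizes by the number of fitted parameters.
DNase1 <- subset(DNase, Run == 1)
fit_fpl <- nls(density ~ SSfpl(log(conc), A, B, xmid, scal), data = DNase1)
fit_lin <- lm(density ~ log(conc), data = DNase1)
AIC(fit_fpl, fit_lin)  # lower AIC suggests the better adjusted fit
```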

Best regards,
James Salsman


On Sun, Mar 7, 2010 at 7:35 PM, Chidambaram Annamalai
 wrote:
> It's been a while since I proposed syrfr and I have been constantly in
> contact with the many people in the R community and I wasn't able to find a
> mentor for the project. I later got interested in the Automatic
> Differentiation proposal (adinr) and, on consulting with a few others within
> the R community, I mailed John Nash (who proposed adinr in the first place)
> if he'd be willing to take me up on the project. I got a positive reply only
> a few hours ago and it was my mistake to have not removed the syrfr proposal
> in time from the wiki, as being listed under proposals looking for mentors.
>
> While I appreciate your interest in the syrfr proposal I am afraid my
> allegiances have shifted towards the adinr proposal, as I got convinced that
> it might interest a larger group of people and it has wider scope in
> general.
>
> I apologize for having caused this trouble.
>
> Best Regards,
> Chillu
>
> On Mon, Mar 8, 2010 at 6:41 AM, James Salsman 
> wrote:
>>
>> Per http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010
>> -- and
>> http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010:syrfr
>> -- I am applying to mentor the "Symbolic Regression for R" (syrfr)
>> package for the Google Summer of Code 2010.
>>
>> I propose the following test which an applicant would have to pass in
>> order to qualify for the topic:
>>
>> 1. Describe each of the following terms as they relate to statistical
>> regression: categorical, periodic, modular, continuous, bimodal,
>> log-normal, logistic, Gompertz, and nonlinear.
>>
>> 2. Explain which parts of http://bit.ly/tablecurve were adopted in
>> SigmaPlot and which weren't.
>>
>> 3. Use the 'outliers' package to improve a regression fit maintaining
>> the correct extrapolation confidence intervals as are between those
>> with and without outlier exclusions in proportion to the confidence
>> that the outliers were reasonably excluded.  (Show your R transcript.)
>>
>> 4. Explain the relationship between degrees of freedom and correlated
>> independent variables.
>>
>> Best regards,
>>
>> James Salsman
>> jsals...@talknicer.com
>> http://talknicer.com
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>
>



Re: [Rd] application to mentor syrfr package development for Google Summer of Code 2010

2010-03-07 Thread James Salsman
Chillu, I meant that development on both a syrfr R package capable of
using either F statistics or parametric derivatives should proceed in
parallel with your work on such a derivatives package. You are right
that genetic algorithm search (and general best-first search --
http://en.wikipedia.org/wiki/Best-first_search -- of which genetic
algorithms are various special cases) can be very effectively
parallelized, too.

In any case, thank you for pointing out Eureqa --
http://ccsl.mae.cornell.edu/eureqa -- but I can see no evidence there
or in the user manual or user forums that Eureqa is considering
degrees of freedom in its goodness-of-fit estimation.  That is a
serious problem which will typically result in invalid symbolic
regression.  I am sending this message also to Michael Schmidt so that
he might be able to comment on the extent to which Eureqa adjusts for
degrees of freedom in his fit evaluations.

Best regards,
James Salsman

On Sun, Mar 7, 2010 at 10:39 PM, Chidambaram Annamalai
 wrote:
>
>> If I understand your concern, you want to lay the foundation for
>> derivatives so that you can implement the search strategies described
>> in Schmidt and Lipson (2010) --
>> http://www.springerlink.com/content/l79v2183725413w0/ -- is that
>> right?
>
> Yes. Basically traditional "naive" error estimators or fitness functions
> fail miserably when used in SR with implicit equations because they
> immediately close in on "best" fits like f(x) = x - x and other trivial
> solutions. In such cases no amount of regularization and complexity
> penalizing methods will help since x - x is fairly simple by most measures
> of complexity and it does have zero error. So the paper outlines such
> problems associated with "direct" error estimators and thus they infer the
> "triviality" of the fit by probing its estimates around nearby points and
> seeing if it does follow the pattern dictated by the data points -- ergo
> derivatives.
>
> Also, somewhat like a side benefit, this method also enables us to perform
> regression on closed loops and other implicit equations since the fitness
> functions are based only on derivatives. The specific form of the error is
> equation 1.2, which is, I believe, what comprises the internals of the
> evaluation procedure used in Eureqa.
>
> You are correct in pointing out that there is no reason to not work in
> parallel, since GAs generally have a more or less fixed form
> (evaluate-reproduce cycle) which is quite easily parallelized. I have used
> OpenMP in the past, in which it is fairly trivial to parallelize well formed
> for loops.
>
> Chillu
>
>> It is not clear to me how well this generalized approach will
>> work in practice, but there is no reason not to proceed in parallel to
>> establish a framework under which you could implement the metrics
>> proposed by Schmidt and Lipson in the contemplated syrfr package.
>>
>> I have expanded the test I proposed with two more questions -- at
>> http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010:syrfr
>> -- specifically:
>>
>> 5. Critique http://sites.google.com/site/gptips4matlab/
>>
>> 6. Use anova to compare the goodness-of-fit of a SSfpl nls fit with a
>> linear model of your choice. How can you characterize the
>> degree-of-freedom-adjusted goodness of fit of nonlinear models?
>>
>> I believe pairwise anova.nls is the optimal comparison for nonlinear
>> models, but there are several good choices for approximations,
>> including the residual standard error, which I believe can be adjusted
>> for degrees of freedom, as can the F statistic which TableCurve uses;
>> see: http://en.wikipedia.org/wiki/F-test#Regression_problems
>>
>> Best regards,
>> James Salsman
>>
>>
>> On Sun, Mar 7, 2010 at 7:35 PM, Chidambaram Annamalai
>>  wrote:
>> > It's been a while since I proposed syrfr and I have been constantly in
>> > contact with the many people in the R community and I wasn't able to
>> > find a
>> > mentor for the project. I later got interested in the Automatic
>> > Differentiation proposal (adinr) and, on consulting with a few others
>> > within
>> > the R community, I mailed John Nash (who proposed adinr in the first
>> > place)
>> > if he'd be willing to take me up on the project. I got a positive reply
>> > only
>> > a few hours ago and it was my mistake to have not removed the syrfr
>> > proposal
>> > in time from the wiki, as being listed under proposals looking for
>> > mentors.
>> >
>> > While I appreciate your interest in the syrf

Re: [Rd] application to mentor syrfr package development for Google Summer of Code 2010

2010-03-08 Thread James Salsman
Michael,

Thanks for your reply:

On Mon, Mar 8, 2010 at 12:41 AM, Michael Schmidt  wrote:
>
> Thanks for contacting me. Eureqa takes into account the total size of an
> equation when comparing different candidate models. It attempts to find the
> set of possible equations that are non-dominated in both error and size. The
> final result is a short list consisting of the most accurate equation for
> increasing equation sizes.
>
> This is closely related to degrees of freedom, but not exactly the same

That's very good, but I wonder whether we can perform automatic
outlier exclusion that way.  We would need to keep the confidence
interval, or at least the information necessary to derive it, accurate
in every step of the genetic beam search.  Since the confidence
intervals of extrapolation depend so heavily on the number of degrees
of freedom of the fit (along with the residual standard error) it's a
good idea to use a degree-of-freedom-adjusted F statistic instead of a
post-hoc combination of equation size and residual standard error, I
would think.  You might want to try it and see how it improves things.
 Confidence intervals, by representing the goodness of fit in the
original units and domain of the dependent variable, are tremendously
useful and sometimes make many kinds of tests which would otherwise be
very laborious easy to eyeball.

Being able to fit curves to one-to-many relations instead of strict
one-to-one functions appeals to those working in the imaging domain,
but not to as many traditional non-image statisticians. Regressing
functions usually results in well-defined confidence intervals, but
regressing general relations with derivatives produces confidence
intervals which can also be relations.  Trying to figure out a
spiral-shaped confidence interval probably appeals to astronomers more
than most people.  So I am proposing that, for R's contemplated
'syrfr' symbolic regression package, we do functions in a general
genetic beam search framework, Chillu and John Nash can do derivatives
in the new 'adinr' package, and then we can try to put them together,
extend the syrfr package with a parameter indicating to fit relations
with derivatives instead of functions, to try to replicate your work
on Eureqa using d.o.f-adjusted F statistics as a heuristic beam search
evaluation function.

Have you quantified the extent to which using the crossover rule in
the equation tree search is an improvement over mutation alone in
symbolic regression?  I am glad that Chillu and Dirk have already
supported that; there is no denying its utility.

Would you like to co-mentor this project?
http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2010:syrfr
I've already stepped forward, so you could do as much or as little as
you like if you wanted to co-mentor and Dirk agreed to that
arrangement.

Best regards,
James Salsman


> On Mon, Mar 8, 2010 at 2:49 AM, James Salsman 
> wrote:
>>
>> I meant that development on both a syrfr R package capable of
>> using either F statistics or parametric derivatives should proceed in
>> parallel with your work on such a derivatives package. You are right
>> that genetic algorithm search (and general best-first search --
>> http://en.wikipedia.org/wiki/Best-first_search -- of which genetic
>> algorithms are various special cases) can be very effectively
>> parallelized, too.
>>
>> In any case, thank you for pointing out Eureqa --
>> http://ccsl.mae.cornell.edu/eureqa -- but I can see no evidence there
>> or in the user manual or user forums that Eureqa is considering
>> degrees of freedom in its goodness-of-fit estimation.  That is a
>> serious problem which will typically result in invalid symbolic
>> regression.  I am sending this message also to Michael Schmidt so that
>> he might be able to comment on the extent to which Eureqa adjusts for
>> degrees of freedom in his fit evaluations.
>>
>> Best regards,
>> James Salsman
>>
>> On Sun, Mar 7, 2010 at 10:39 PM, Chidambaram Annamalai
>>  wrote:
>> >
>> >> If I understand your concern, you want to lay the foundation for
>> >> derivatives so that you can implement the search strategies described
>> >> in Schmidt and Lipson (2010) --
>> >> http://www.springerlink.com/content/l79v2183725413w0/ -- is that
>> >> right?
>> >
>> > Yes. Basically traditional "naive" error estimators or fitness functions
>> > fail miserably when used in SR with implicit equations because they
>> > immediately close in on "best" fits like f(x) = x - x and other trivial
>> > solutions. In such cases no amount of regularization and complexity
>> > penalizing methods will help since x - x is fairly

Re: [Rd] application to mentor syrfr package development for Google Summer of Code 2010

2010-03-10 Thread James Salsman
Michael,

Thanks for your reply with the information about the Eureqa API -- I
am forwarding it to the r-devel list below.

Dirk,

Will you please agree to referring to the syrfr package as symbolic
genetic algorithm regression of functions but not (yet) general
relations?  It would be best to refer to general relation regression
as a future package, something like 'syrr' and leave the
parametrization of the derivatives to that package.

May I please mentor, in consultation with Michael if necessary, work
on general function regressions while Chillu and John Nash work on the
derivative package necessary for general relation regressions?  Thank
you for your kind consideration.

Best regards,
James Salsman


On Tue, Mar 9, 2010 at 11:06 AM, Michael Schmidt  wrote:
> I think it's a great idea worth trying out. We have always done significance
> tests just on the final frontier of models as a post processing step. Moving
> this into the algorithm could focus the search more on significant higher
> quality solutions. One thing to beware of though is that using parsimony
> pressure just on the number of free parameters tends to focus the search on
> complex equations with no free parameters. So, some care should be taken how
> to implement it.
>
> We do see a measurable improvement using crossover versus just mutation on
> random test problems. Empirically, it doesn't seem necessary for all
> problems but also doesn't seem to ever inhibit the search.
>
> I didn't know that anyone was working on a SR package for R. Very cool! I'm
> happy to consult if you have any questions I can help with.
>
> You may also be interested that we just recently opened up the API for
> interacting with Eureqa servers:
> http://code.google.com/p/eureqa-api/
> If you know of anyone that might be interested in making a wrapper for R,
> please forward.
>
> Michael
>
>
> On Mon, Mar 8, 2010 at 5:45 PM, James Salsman 
> wrote:
>>
>> Michael,
>>
>> Thanks for your reply:
>>
>> On Mon, Mar 8, 2010 at 12:41 AM, Michael Schmidt 
>> wrote:
>> >
>> > Thanks for contacting me. Eureqa takes into account the total size of an
>> > equation when comparing different candidate models. It attempts to find
>> > the
>> > set of possible equations that are non-dominated in both error and size.
>> > The
>> > final result is a short list consisting of the most accurate equation
>> > for
>> > increasing equation sizes.
>> >
>> > This is closely related to degrees of freedom, but not exactly the same
>>
>> That's very good, but I wonder whether we can perform automatic
>> outlier exclusion that way.  We would need to keep the confidence
>> interval, or at least the information necessary to derive it, accurate
>> in every step of the genetic beam search.  Since the confidence
>> intervals of extrapolation depend so heavily on the number of degrees
>> of freedom of the fit (along with the residual standard error) it's a
>> good idea to use a degree-of-freedom-adjusted F statistic instead of a
>> post-hoc combination of equation size and residual standard error, I
>> would think.  You might want to try it and see how it improves things.
>>  Confidence intervals, by representing the goodness of fit in the
>> original units and domain of the dependent variable, are tremendously
>> useful and sometimes make many kinds of tests which would otherwise be
>> very laborious easy to eyeball.
>>
>> Being able to fit curves to one-to-many relations instead of strict
>> one-to-one functions appeals to those working in the imaging domain,
>> but not to as many traditional non-image statisticians. Regressing
>> functions usually results in well-defined confidence intervals, but
>> regressing general relations with derivatives produces confidence
>> intervals which can also be relations.  Trying to figure out a
>> spiral-shaped confidence interval probably appeals to astronomers more
>> than most people.  So I am proposing that, for R's contemplated
>> 'syrfr' symbolic regression package, we do functions in a general
>> genetic beam search framework, Chillu and John Nash can do derivatives
>> in the new 'adinr' package, and then we can try to put them together,
>> extend the syrfr package with a parameter indicating to fit relations
>> with derivatives instead of functions, to try to replicate your work
>> on Eureqa using d.o.f-adjusted F statistics as a heuristic beam search
>> evaluation function.
>>
>> Have you quantified the extent to which using the crossover rule in
>> the equation

[Rd] ranges and contiguity checking

2010-05-12 Thread James Bullard
Hi All,

I am interfacing to some C libraries (hdf5) and I have methods defined for
'[', these methods do hyperslab selection, however, currently I am
limiting slab selection to contiguous blocks, i.e., things defined like:
i:(i+k). I don't do any contiguity checking at this point, I just grab the
max and min of the range and them potentially do an in-memory subselection
which is what I am definitely trying to avoid. Besides using deparse, I
can't see anyway to figure out that these things (i:(i+k) and c(i, i+1,
..., i+k)) are different.

I have always liked how 1:10 was a valid expression in R (as opposed to
python where it is not by itself.), however I'd somehow like to know that
the thing was contiguous range without examining the un-evaluated
expression or worse, all(diff(i:(i+k)) == 1)

thanks, jim
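A minimal contiguity check of the kind being asked about (a hedged sketch; the helper name is my own invention, and it still costs O(n) on an arbitrary index vector):

```r
## Hedged sketch: test whether an index vector is a contiguous
## ascending run, i.e. equivalent to min(x):max(x).
is_contiguous <- function(x) {
  length(x) <= 1L || all(diff(x) == 1L)
}
is_contiguous(3:7)        # TRUE
is_contiguous(c(3, 5, 7)) # FALSE
```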



Re: [Rd] ranges and contiguity checking

2010-05-12 Thread James Bullard
>> -Original Message-
>> From: r-devel-boun...@r-project.org
>> [mailto:r-devel-boun...@r-project.org] On Behalf Of Duncan Murdoch
>> Sent: Wednesday, May 12, 2010 11:35 AM
>> To: bull...@stat.berkeley.edu
>> Cc: r-de...@stat.math.ethz.ch
>> Subject: Re: [Rd] ranges and contiguity checking
>>
>> On 12/05/2010 2:18 PM, James Bullard wrote:
>> > Hi All,
>> >
>> > I am interfacing to some C libraries (hdf5) and I have
>> methods defined for
>> > '[', these methods do hyperslab selection, however, currently I am
>> > limiting slab selection to contiguous blocks, i.e., things
>> defined like:
>> > i:(i+k). I don't do any contiguity checking at this point,
>> I just grab the
>> > max and min of the range and them potentially do an
>> in-memory subselection
>> > which is what I am definitely trying to avoid. Besides
>> using deparse, I
>> > can't see anyway to figure out that these things (i:(i+k)
>> and c(i, i+1,
>> > ..., i+k)) are different.
>> >
>> > I have always liked how 1:10 was a valid expression in R
>> (as opposed to
>> > python where it is not by itself.), however I'd somehow
>> like to know that
>> > the thing was contiguous range without examining the un-evaluated
>> > expression or worse, all(diff(i:(i+k)) == 1)
>
> You could define a sequence class, say 'hfcSeq'
> and insist that the indices given to [.hfc are
> hfcSeq objects.  E.g., instead of
> hcf[i:(i+k)]
> the user would use
> hcf[hfcSeq(i,i+k)]
> or
> index <- hfcSeq(i,i+k)
> hcf[index]
> max, min, and range methods for hfcSeq
> would just inspect one or both of its
> elements.

I could do this, but I wanted it to not matter to the user whether or not
they were dealing with a HDF5Dataset or a plain-old matrix.

It seems like I cannot define methods on: ':'. If I could do that then I
could implement an immutable 'range' class which would be good, but then
I'd have to also implement: '['(matrix, range) -- which would be easy, but
still more work than I wanted to do.

I guess I was thinking that there is some inherent value in an immutable
native range type which is constant in time and memory for construction.
Then I could define methods on '['(matrix, range) and '['(matrix,
integer). I'm pretty confident this is more or less what is happening in the
IRanges package in Bioconductor, but (maybe for the lack of support for
setting methods on ':') it is happening in a way that makes things very
non-transparent to a user. As it stands, I can optimize for performance by
using a IRange-type wrapper or I can optimize for code-clarity by killing
performance.

thanks again, jim





>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>>
>> You can implement all(diff(x) == 1) more efficiently in C,
>> but I don't
>> see how you could hope to do any better than that without
>> putting very
>> un-R-like restrictions on your code.  Do you really want to say that
>>
>> A[i:(i+k)]
>>
>> is legal, but
>>
>> x <- i:(i+k)
>> A[x]
>>
>> is not?  That will be very confusing for your users.  The problem is
>> that objects don't remember where they came from, only arguments to
>> functions do, and functions that make use of this fact mainly
>> do it for
>> decorating the output (nice labels in plots) or making error messages
>> more intelligible.
>>
>> Duncan Murdoch
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>



[Rd] difficulties with setMethod("[" and ...

2010-05-17 Thread James Bullard
Apologies if I am not understanding something about how things are being
handled when using S4 methods, but I have been unable to find an answer to
my problem for some time now.

Briefly, I am associating the generic '[' with a class which I wrote
(here: myExample). The underlying back-end allows me to read contiguous
slabs, e.g., 1:10, but not c(1, 10). I want to shield the user from this
infelicity, so I grab the slab and then subset in memory. The main problem
is with datasets with dim(.) > 2. In this case, the '...' argument doesn't
seem to be in a reasonable state. When it is indeed missing, missing(...)
properly reports that fact; however, when it is not missing, the call to
list(...) throws an "argument is missing" exception.

I cannot imagine that this has not occurred before, so I am expecting
someone might be able to point me to some example code. I have attached
some code demonstrating my general problem ((A) and (B) below) as well as
the outline of the sub-selection code. I have to say that coding this has
proven non-trivial and any thoughts on cleaning up the mess are welcome.

As always, thanks for the help.

Jim

require(methods)

setClass('myExample', representation = representation(x = "array"))

myExample <- function(dims = c(1,2)) {
  a <- array(rnorm(prod(dims)))
  dim(a) <- dims
  obj <- new("myExample")
  obj@x <- a
  return(obj)
}

setMethod("dim", "myExample", function(x) return(dim(x@x)))

functionThatCanOnlyGrabContiguous <- function(x, m, kall) {
  kall$x <- x@x
  for (i in 1:nrow(m)) {
kall[[i+2]] <- seq.int(m[i,1], m[i,2])
  }
  print(as.list(kall))
  return(eval(kall))
}

setMethod("[", "myExample", function(x, i, j, ..., drop = TRUE) {
  if (missing(...)){
print("Missing!")
  }
  e <- list(...)
  m <- matrix(nrow = length(dim(x)), ncol = 2)

  if (missing(i))
m[1,] <- c(1, dim(x)[1])
  else
m[1,] <- range(i)

  if (length(dim(x)) > 1) {
if (missing(j))
  m[2,] <- c(1, dim(x)[2])
else
  m[2,] <- range(j)

k <- 3
while (k <= nrow(m)) {
  if (k-2 <= length(e))
m[k,] <- range(e[[k-2]])
  else
m[k,] <- c(1, dim(x)[k])
  k <- k + 1
}
  }
  kall <- match.call()
  d <- functionThatCanOnlyGrabContiguous(x, m, kall)

  kall$x <- d
  if (! missing(i)) {
kall[[3]] <- i - min(i) + 1
  }
  if (! missing(j)) {
kall[[4]] <- j - min(j) + 1
  } else {
if (length(dim(x)) > 1)
  kall[[4]] <- seq.int(1, dim(x)[2])
  }
  ## XXX: Have to handle remaining dimensions, but since I can't
  ## really get a clean '...' it is on hold.

  eval(kall)
})

## ### 1-D
m <- myExample(10)
m@x[c(1,5)] == m[c(1, 5)]

## ### 2-D
m <- myExample(c(10, 10))
m@x[c(1,5), c(1,5)] == m[c(1,5), c(1,5)]
m@x[c(5, 2),] == m[c(5,2),]

## ### 3-D
m <- myExample(c(1,3,4))

## (A) doesn't work
m@x[1,1:2,] == m[1,1:2,]

## (B) nor does this for different reasons.
m[1,,1]
m@x[1,,1]
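One idiom that may help with the missing-argument cases above (a hedged sketch on a toy class, not a fix for this exact one): many S4 subsetting methods use nargs() to tell x[i] apart from x[i, j] with empty slots, and fill in missing indices explicitly before ever touching '...':

```r
## Hedged sketch: nargs() distinguishes w[i] (2 args) from w[i, j]
## (3 args, where i or j may be missing); the class name is illustrative.
setClass("wrap", representation(x = "matrix"))
setMethod("[", "wrap", function(x, i, j, ..., drop = TRUE) {
  if (nargs() == 2L) {               # called as w[i]: linear indexing
    x@x[i]
  } else {                           # called as w[i, j], possibly w[i, ]
    if (missing(i)) i <- seq_len(nrow(x@x))
    if (missing(j)) j <- seq_len(ncol(x@x))
    x@x[i, j, drop = drop]
  }
})
w <- new("wrap", x = matrix(1:6, nrow = 2))
w[2, ]  # second row
w[5]    # fifth element in column-major order
```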

> sessionInfo()
R version 2.11.0 (2010-04-22)
x86_64-pc-linux-gnu

locale:
 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8   LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.11.0



[Rd] R from SVN fails to build on win32

2007-02-06 Thread James MacDonald
I get the following error when building R from the subversion server as
well as the latest tarball. I am on Windows XP, and I recently updated
my MinGW installation. It's quite possible I am doing something wrong,
but I am not sure what that might be.

making console.d from console.c
making dataentry.d from dataentry.c
making dynload.d from dynload.c
making edit.d from edit.c
making editor.d from editor.c
making embeddedR.d from embeddedR.c
making extra.d from extra.c
making opt.d from opt.c
making pager.d from pager.c
making preferences.d from preferences.c
making psignal.d from psignal.c
making rhome.d from rhome.c
making rui.d from rui.c
making run.d from run.c
making shext.d from shext.c
making sys-win32.d from sys-win32.c
making system.d from system.c
making dos_glob.d from dos_glob.c
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c console.c -o console.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c dataentry.c -o dataentry.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c dynload.c -o dynload.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c edit.c -o edit.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c editor.c -o editor.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c embeddedR.c -o embeddedR.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD -DLEA_MALLOC -c extra.c -o extra.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c opt.c -o opt.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c pager.c -o pager.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c preferences.c -o preferences.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c psignal.c -o psignal.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c rhome.c -o rhome.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c rui.c -o rui.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c run.c -o run.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c shext.c -o shext.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c sys-win32.c -o sys-win32.o
sys-win32.c: In function `do_system':
sys-win32.c:183: warning: `hERR' might be used uninitialized in this function
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c system.c -o system.o
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c dos_glob.c -o dos_glob.o
gcc -c -o e_pow.o e_pow.S
gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
-DR_DLL_BUILD  -c malloc.c -o malloc.o
windres  -I ../include -i dllversion.rc -o dllversion.o
c:\MinGW\bin\windres.exe: unknown format type `../include'
c:\MinGW\bin\windres.exe: supported formats: rc res coff
make[3]: *** [dllversion.o] Error 1
make[2]: *** [../../bin/R.dll] Error 2
make[1]: *** [rbuild] Error 2
make: *** [all] Error 2


Best,

Jim



James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623





[Rd] Best practices - R CMD check and vignettes

2007-09-19 Thread James MacDonald
Hi,

I have a package that contains two vignettes that both use saved objects 
in the examples directory of the package. With previous versions of R I 
could have a code chunk in the vignette like this:

<<>>=
load("../examples/somedata.Rdata")
@

followed by a code chunk like

<<>>=
foo <- bar("data")
@

that simulated the actual reading in of the data (I use a saved object 
to limit package size).

This passed check with previous versions of R, but under R-2.6.0alpha, 
the vignettes are dumped into an inst directory in the .Rcheck 
directory, where Sweave() and texi2dvi() are run. Unfortunately, the 
above code chunks no longer work.

I can certainly hard code the first chunk to find the .Rdata file, but I 
have to imagine there is a much more elegant way to do this.

Any suggestions?
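One common alternative (a sketch, assuming the saved object ships under inst/examples/ so it installs into the package's examples/ directory; "mypkg" is a placeholder name): resolve the path with system.file(), which works the same way during R CMD check and after installation:

```r
## Hedged sketch: locate a data file installed with the package.
## "mypkg" and "somedata.Rdata" are placeholders for illustration.
path <- system.file("examples", "somedata.Rdata", package = "mypkg")
load(path)
```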

Best,

Jim


-- 
James W. MacDonald, MS
Biostatistician
UMCCC cDNA and Affymetrix Core
University of Michigan
1500 E Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



[Rd] Is rcompgen still recommended?

2008-02-18 Thread James MacDonald
I just built R-devel from source on OS X (Tiger), using the subversion 
sources. Running ./tools/rsync-recommended didn't download rcompgen. I 
checked

http://cran.r-project.org/src/contrib/2.7.0/Recommended

and indeed, this package is not there. If I try to install using 
install.packages I get

 > install.packages("rcompgen", type="source")
--- Please select a CRAN mirror for use in this session ---
Loading Tcl/Tk interface ... done
trying URL 'http://cran.fhcrc.org/src/contrib/rcompgen_0.1-17.tar.gz'
Content type 'application/x-gzip' length 29240 bytes (28 Kb)
opened URL
==
downloaded 28 Kb

* Installing *source* package 'rcompgen' ...
Error: Invalid DESCRIPTION file

Invalid Priority field.
Packages with priorities 'base' or 'recommended' or 'defunct-base' must
already be known to R.

See the information on DESCRIPTION files in section 'Creating R
packages' of the 'Writing R Extensions' manual.
Execution halted
ERROR: installing package DESCRIPTION failed
** Removing '/Users/jmacdon/R-devel/library/rcompgen'

The downloaded packages are in
/private/tmp/Rtmpb0Refs/downloaded_packages
Updating HTML index of packages in '.Library'
Warning message:
In install.packages("rcompgen", type = "source") :
   installation of package 'rcompgen' had non-zero exit status


I assume I am missing something obvious, but don't know what it is. Any 
pointers?

Best,

Jim



 > sessionInfo()
R version 2.7.0 Under development (unstable) (2008-02-18 r44516)
i386-apple-darwin8.11.1

locale:
C

attached base packages:
[1] stats graphics  grDevices datasets  utils methods   base

loaded via a namespace (and not attached):
[1] tcltk_2.7.0 tools_2.7.0
-- 
James W. MacDonald, MS
Biostatistician
UMCCC cDNA and Affymetrix Core
University of Michigan
1500 E Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] confusion about evaluation.

2008-07-20 Thread James Bullard
Hi All, I am confused about the following code. I thought that the  
problem stemmed from lazy evaluation and the fact that 'i' is never  
evaluated within the first lapply. However, I am then confused as to  
why it gets bound to the final element of the lapply. The environments  
of the returned functions are indeed different in both cases and each  
environment has a local binding for 'i' it just happens to be 3 --  
which I will say is wrong, but I await reeducation.


I looked for documentation concerning this, but I wasn't able to find  
anything -- I imagine that this must be documented somewhere as it  
appears like a reasonable thing to do.  Thank you as always for any  
insight. -- jim


## doesn't do what I think it should
x <- lapply(1:3, function(i) {
function() { i^2 }
})
sapply(1:3, function(i) x[[i]]())

[1] 9 9 9

## does what I expect
x <- lapply(1:3, function(i) {
print(i)
function() { i^2 }
})
[1] 1
[1] 2
[1] 3

sapply(1:3, function(i) x[[i]]())

[1] 1 4 9
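
For reference, the behavior follows from lazy evaluation: each inner
function captures a promise for 'i' that is only evaluated after the
outer lapply() has finished, at which point 'i' is 3 in every
environment. Forcing the promise at creation time gives the expected
result without the print() side effect (and note that since R 3.2.0 the
apply functions force their arguments, so the first example now behaves
as expected as well):

```r
x <- lapply(1:3, function(i) {
  force(i)              # evaluate the promise now, pinning i to 1, 2, 3
  function() { i^2 }
})
sapply(1:3, function(i) x[[i]]())
## [1] 1 4 9
```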


> sessionInfo()
R version 2.7.1 Patched (2008-07-20 r46088)
i386-apple-darwin9.4.0

locale:
en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] NAMESPACE/DESCRIPTION and imports

2008-12-14 Thread James MacDonald
Hi,

Could someone point me to the relevant documentation that covers what should be 
in the DESCRIPTION file for packages that have functions imported via the 
NAMESPACE file? I have read the R Extensions manual, but I cannot find where it 
covers the DESCRIPTION file vis a vis importing from a namespace.

An example:

I have a package foo that uses two functions x and y from package bar. Both 
packages have namespaces, and I just want to import the functions from bar 
rather than attaching the package.

I put 

Imports: bar

in my DESCRIPTION file and

importFrom(bar, x, y)

in my NAMESPACE file.

I have a vignette that uses both x and y from bar, and when I run R CMD build 
foo, at the vignette building step it errors out because it cannot find 
function x. If I add

Depends: bar

to my DESCRIPTION file it all works, but bar is attached and it seems I have 
not accomplished what I wanted. I am obviously misunderstanding something but I 
don't know what.
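
For reference, Imports: plus importFrom() only makes x and y visible
inside foo's own namespace; a vignette's code chunks are evaluated like
user code in the global environment, so they cannot see unattached
imports. A sketch of the two usual workarounds, reusing the hypothetical
foo/bar names above:

```r
## In the vignette's code chunks, either attach bar explicitly...
library(bar)
res <- x(1)

## ...or qualify each call so that bar need not be attached:
res <- bar::x(1)
```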

Best,

Jim



James W. MacDonald, M.S.
Biostatistician
Hildebrandt Lab
8220D MSRB III
1150 W. Medical Center Drive
Ann Arbor MI 48109-0646
734-936-8662
**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] iconv.dll in Windows

2009-03-12 Thread James MacDonald
I recently built R-devel on Windows XP (sessionInfo below), and when loading 
libraries that require the iconv.dll was getting an error stating that 'This 
application has failed to start because iconv.dll was not found. Re-installing 
the application may fix this problem.'.

An R-2.8.1 that I installed using the Windows installer has this dll in 
R-2.8.1/bin, whereas in R-devel it is still in R-devel/src/gnuwin32/unicode. 
Moving the dll  to R-devel/bin alleviates the problem.

I built using the recent recommendations of P. Dalgaard  (make Rpwd.exe, make 
link-recommended, make all recommended). I don't see anything in the NEWS for 
this version, but maybe I missed something?

Best,

Jim

> sessionInfo()
R version 2.9.0 Under development (unstable) (2009-03-11 r48117) 
i386-pc-mingw32 

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=English_United 
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices datasets  utils methods   base 

other attached packages:
[1] XML_1.99-0
-- 

James W. MacDonald, M.S.
Biostatistician
Douglas Lab
5912 Buhl
1241 E. Catherine St.
Ann Arbor MI 48109-5618
734-615-7826


**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] multiple packages using the same native code.

2006-03-15 Thread James Bullard
This might fall under the purview of bundles, but I could not find any 
example bundles which demonstrated what I am after.

I would like to construct two packages (A, B) which utilize a number of 
common C functions. The most straightforward way to do this is just copy 
the relevant .c and .h files from one src directory to the next, but 
this is tedious especially in the face of multiple developers and changes.

If I declare in the Depends field that package A needs to be present in 
order to install package B, this only enforces that package A has been 
installed, correct? Is there a way to check whether the source of a package 
is available and to be able to compile against it (the source of a 
package does not seem to be installed by default so this might, in 
general, be impossible)? Linking against installed packages seems to be 
easier in the sense that I know that if a package is installed that uses 
native code the .so is available, but is there a makevars variable which 
I can use to tell R to add to its linking command upon R CMD INSTALL?

Does anyone have examples of configure scripts which are doing this by 
hand? I could see this as being a relatively painless addition for 
linking by mapping any dependency specified in the depends field (in 
DESCRIPTION) to additional dependencies in the list of directories to 
link against, but in terms of compiling I don't see an obvious solution 
besides writing it myself in the configure, but then it might make it 
much harder for the user to install.

Sorry if this is in the help manual - I have looked at places where I 
thought it might naturally be, but did not see anything.

Thanks in advance, jim

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] multiple packages using the same native code.

2006-03-16 Thread James Bullard
Seth, thanks for the advice. This solution seems like it might work, but 
then all errors occur at runtime rather than at compile time, so I seem 
to be exchanging one evil for another (run-time segfaults versus code 
duplication). Let's say we have these three packages A, B, and C, 
defined more or less like this:

A/src/bar.c
int bar()
{
foo();
}

B/src/baz.c
int baz()
{
foo();
}

C/src/foo.c
int foo()
{
return 1;
}


Now, the only way I can see to do this is to copy foo.c into the src 
directories of both packages A and B. That is not really what anyone 
wants; I'd rather just say that both A and B depend on package C. If I 
put them in a bundle, can I expect that the source will always be 
simultaneously available? If so, I can easily modify the configure 
script to handle this; but if I have no way to depend on the presence of 
the code (i.e. users could download and install the packages separately 
even if it's a bundle), then there seems to be no general way to modify 
the configure file to do this.


thanks, jim





Seth Falcon wrote:

>Hi Jim,
>
>James Bullard <[EMAIL PROTECTED]> writes:
>  
>
>>I would like to construct two packages (A, B) which utilize a number of 
>>common C functions. The most straightforward way to do this is just copy 
>>the relevant .c and .h files from one src directory to the next, but 
>>this is tedious especially in the face of multiple developers and
>>changes.
>>
>>
>
>I'm not sure I understand what you are after.  One possible solution
>would be to create a third package 'C' that contains the common C
>code.  This would allow you to call C function defined in 'C' from the
>C code in 'A' or 'B'.
>
>Using a .onLoad hook and getNativeSymbolInfo(), you can pass C
>function pointers to the code in packages A and B.
>
>Suppose in 'C' you have a C function foo() that is registered in the
>usual manner so that it can be called by .Call or .C.
>
>Then in 'A' you could have (all untested, sorry, but hopefully it
>sketches the idea for you):
>
>A/src/A.c
>
>   static DL_FUNC C_foo;
>
>   void init_funcs_from_C(SEXP foo_info) {
>   C_foo = R_ExternalPtrAddr(foo_info);
>   }
>
>   void bar(int *x) {
>   ...
>   z = C_foo();
>   ...
>   }
>
>
>A/R/zzz.R
>
>   .onLoad <- function(libname, pkgname) {
>   foo_info <- getNativeSymbolInfo("foo", PACKAGE="C")
>   .Call("init_funcs_from_C", foo_info$address)
>   }
>
>
>+ seth
>
>__
>R-devel@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-devel
>
>  
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] write.table and segment fault?

2006-03-29 Thread David James
Hi,

I'm experiencing segmentation faults in multiple versions of R
when writing out a particular data.frame:

   ## dd is a 44 by 3 data.frame

   load(url("http://stat.bell-labs.com/RS-DBI/download/dd.rda"))
   write.table(dd, file = "dd.csv", sep = ",", row.names = FALSE)

this occurs on

> sessionInfo()
  R version 2.2.0, 2005-10-06, i686-pc-linux-gnu

  attached base packages:
  [1] "methods"   "stats" "graphics"  "grDevices" "utils" "datasets"
  [7] "base"

> sessionInfo()
  R version 2.2.1, 2005-12-20, i686-pc-linux-gnu
 
  attached base packages:
  [1] "methods"   "stats" "graphics"  "grDevices" "utils" "datasets"
  [7] "base"

> sessionInfo()
  Version 2.3.0 alpha (2006-03-27 r37590)
  i686-pc-linux-gnu
 
  attached base packages:
  [1] "methods"   "stats" "graphics"  "grDevices" "utils" "datasets"
  [7] "base"
  > load("dd.rda")
  > write.table(dd, file = "dd.csv", sep=",", row.names = TRUE)
 
   *** caught segfault ***
  address 0x18, cause 'memory not mapped'
 
  Traceback:
   1: write.table(dd, file = "dd.csv", sep = ",", row.names = TRUE)
   
  Possible actions:
  1: abort (with core dump)
  2: normal R exit
  3: exit R without saving workspace
  4: exit R saving workspace
  Selection: 3

--
David

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Segfault with too many menu items on Rgui

2006-03-31 Thread James MacDonald
Hi all,

In the CHANGES file for R-2.3.0alpha, there is the following
statement:

winMenuAdd() now has no limits on the number of menus or items, and
names are now limited to 500 (not 50) bytes.

However, I can reproducibly get a segfault using this (admittedly
silly) example:

for( i in 1:5) winMenuAdd(paste("Test", letters[i], sep=""))
for(i in 1:5) for(j in 1:24) winMenuAddItem(paste("Test", letters[i],
sep=""), as.character(j), paste(rep(letters[j], 4), collapse=""))

This is probably almost never a problem, but many Bioconductor packages
have vignettes that are added to a 'Vignettes' menu item. If you load
enough of these packages you will get a segfault.

> version
   _  
platform   i386-pc-mingw32
arch   i386   
os mingw32
system i386, mingw32  
status alpha  
major  2  
minor  3.0
year   2006   
month  03 
day29 
svn rev37607  
language   R  
version.string Version 2.3.0 alpha (2006-03-29 r37607)

Best,

Jim



James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623


**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R CMD check: non source files in src on (2.3.0 RC (2006-04-19 r37860))

2006-04-19 Thread James Bullard
Hello, I am having an issue with R CMD check with the nightly build of 
RC 2.3.0 (listed in the subject.)

The problem is this warning:

* checking if this is a source package ... WARNING
Subdirectory 'src' contains:
   README _Makefile
These are unlikely file names for src files.

In fact, they are not source files, but I do not see any reason why they 
cannot be there, or why I need to be warned of their presence. 
Potentially I could be informed of their presence, but that is another 
matter.

Now, I only get this warning when I do:

R CMD build affxparser
R CMD check -l ~/R-packages/ affxparser_1.3.3.tar.gz

If I do:

R CMD check -l ~/R-packages affxparser

I do not get the warning. Is this inconsistent, or is there rationale 
behind this? I think the warning is inappropriate, or at the least a
little restrictive. It seems as if I should be able to put whatever I 
want in there, especially the _Makefile as I like to build test programs
directly and I want to be able to build exactly what I check out from
my source code repository without having to copy files in and out.

The output from R CMD check is below. Any insight would be appreciated. 
As always thanks for your patience.

jim





*** Directly on directory ***


[EMAIL PROTECTED]:~/projects/bioc$ R CMD check -l ~/R-packages affxparser
* checking for working latex ... OK
* using log directory '/home/bullard/projects/bioc/affxparser.Rcheck'
* using Version 2.3.0 RC (2006-04-19 r37860)
* checking for file 'affxparser/DESCRIPTION' ... OK
* this is package 'affxparser' version '1.3.3'
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking whether package 'affxparser' can be installed ... OK
* checking package directory ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for syntax errors ... OK
* checking R files for library.dynam ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking Rd files ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking for CRLF line endings in C/C++/Fortran sources/headers ... OK
* checking for portable compilation flags in Makevars ... OK
* creating affxparser-Ex.R ... OK
* checking examples ... OK
* creating affxparser-manual.tex ... OK
* checking affxparser-manual.tex ... OK


*** On the R CMD build version ***


[EMAIL PROTECTED]:~/projects/bioc$ R CMD check -l ~/R-packages/ 
affxparser_1.3.3.tar.gz
* checking for working latex ... OK
* using log directory '/home/bullard/projects/bioc/affxparser.Rcheck'
* using Version 2.3.0 RC (2006-04-19 r37860)
* checking for file 'affxparser/DESCRIPTION' ... OK
* this is package 'affxparser' version '1.3.3'
* checking package dependencies ... OK
* checking if this is a source package ... WARNING
Subdirectory 'src' contains:
   README _Makefile
These are unlikely file names for src files.
* checking whether package 'affxparser' can be installed ... OK
* checking package directory ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for syntax errors ... OK
* checking R files for library.dynam ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking Rd files ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking for CRLF line endings in C/C++/Fortran sources/headers ... OK
* checking for portable compilation flags in Makevars ... OK
* creating affxparser-Ex.R ... OK
* checking examples ... OK
* creating affxparser-manual.tex ... OK
* checking affxparser-manual.tex ... OK

WARNING: There was 1 warning, see
   /home/bullard/projects/bioc/affxparser.Rcheck/00check.log
for details

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Build error/zlib question

2006-09-28 Thread James MacDonald
Hi,

I am unable to build a package I maintain using a relatively current
build of R-2.4.0 alpha, whereas the package builds just fine on R-2.3.1.
Both versions of R were built from source. I'm hoping a guRu might be
able to give some help.

Some snippets from the build process:

R-2.3.1

  making DLL ...
gcc  -Ic:/R-2.3.1/src/extra/zlib -DHAVE_ZLIB -Ic:/R-2.3.1/include -Wall
-O2   -c read_cdffile.c -o read_cdffile.o
read_cdffile.c: In function `readQC':
read_cdffile.c:565: warning: unused variable `param_unit'
windres --include-dir c:/R-2.3.1/include  -i makecdfenv_res.rc -o
makecdfenv_res.o
gcc  -shared -s  -o makecdfenv.dll makecdfenv.def read_cdffile.o
makecdfenv_res.o  -Lc:/R-2.3.1/bin   -lR
  ... DLL made

R-2.4.0 beta

   making DLL ...
gcc  -Ic:/rw2040dev/src/extra/zlib -DHAVE_ZLIB -Ic:/rw2040dev/include 
-Wall -O2 -std=gnu99   -c read_cdffile.c -o read_cdffile.o
read_cdffile.c: In function `readQC':
read_cdffile.c:565: warning: unused variable `param_unit'
windres --include-dir c:/rw2040dev/include  -i makecdfenv_res.rc -o
makecdfenv_res.o
gcc  -shared -s  -o makecdfenv.dll makecdfenv.def read_cdffile.o
makecdfenv_res.o  -Lc:/rw2040dev/bin   -lR
read_cdffile.o:read_cdffile.c:(.text+0x42): undefined reference to
`gzgets'
read_cdffile.o:read_cdffile.c:(.text+0xf3): undefined reference to
`gzopen'
read_cdffile.o:read_cdffile.c:(.text+0x10f): undefined reference to
`gzgets'
read_cdffile.o:read_cdffile.c:(.text+0x140): undefined reference to
`gzrewind'
read_cdffile.o:read_cdffile.c:(.text+0x177): undefined reference to
`gzclose'
collect2: ld returned 1 exit status
make[3]: *** [makecdfenv.dll] Error 1
make[2]: *** [srcDynlib] Error 2
make[1]: *** [all] Error 2
make: *** [pkg-makecdfenv] Error 2
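
For reference, the undefined gz* symbols suggest the link step no longer
resolves zlib through R's import library. A hedged guess at a fix,
assuming the package has (or gains) a src/Makevars, is to link zlib
explicitly; the flags below are illustrative, not taken from the thread:

```make
## Hypothetical src/Makevars addition: compile with zlib support and
## link against the system (or toolchain-provided) zlib.
PKG_CPPFLAGS = -DHAVE_ZLIB
PKG_LIBS = -lz
```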

> version
   _
platform   i386-pc-mingw32  
arch   i386 
os mingw32  
system i386, mingw32
status alpha
major  2
minor  4.0  
year   2006 
month  09   
day10   
svn rev39242
language   R
version.string R version 2.4.0 alpha (2006-09-10 r39242)

TIA,

Jim

James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623


**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Compiling R 2.4.0 in ubuntu/linux

2006-10-11 Thread James Bullard


Gavin Simpson wrote:
> On Wed, 2006-10-11 at 17:58 -0400, T C wrote:
>> I'm not sure if this is the place to post this question, but, I am
>> having trouble compiling the source code. I do have a suitable C
>> compiler and f2c but I get this error when I run ./configure
>>
>> configure: error: --with-readline=yes (default) and headers/libs are
>> not available
>>
>> Any ideas? Thanks.
> 
> You need the readline development headers, which in general are found in
> -devel package available for your distribution. You might
> have readline installed, but clearly you don't have the development
> headers or they are installed in a non-standard place that the configure
> script can't find them.
> 
> I'm not familiar with Ubuntu, but on Fedora the package you need is
> readline-devel. Install it using Ubuntu's package manager and then try
> to recompile.

On ubuntu (dapper) the package is:

libreadline5-dev


> 
> And please *don't* post to both R-Help *and* R-devel; posting to one
> list is quite sufficient and only R-devel is the appropriate list for
> questions of this nature.
> 
> HTH
> 
> G
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Coping with non-standard evaluation in R program analysis

2018-01-02 Thread Evan James Patterson
Hello R experts,


I plan to develop a tool for dynamic analysis of R programs. I would like to 
trace function calls at runtime, capturing argument and return values. 
Following a suggestion made some time ago on this list, my high-level 
implementation strategy is to rewrite the AST, augmenting call expressions with 
pre-call and post-call shims to capture the arguments and return value, 
respectively.


I can think of only one fundamental conceptual obstacle to this approach: R 
functions are not necessarily referentially transparent. The arguments received 
by a function are not values but promises. They can be evaluated directly 
("standard evaluation"), after applying arbitrary syntactic transformations 
("non-standard evaluation", aka NSE), or not at all. Therefore, if you peek at 
the values of function arguments before evaluating the function, you risk 
altering the semantics of the program, possibly fatally.


I'm looking for general advice about how to cope with NSE in this context. I 
also have some specific questions:


1) Is it possible to determine whether a given function (primitive, in R, or 
external) uses NSE on some or all of its arguments?


2) Is it possible to inspect the promise objects received by functions, say to 
determine whether they have been evaluated, without actually evaluating them? 
The R manual is not encouraging in this area:


https://cran.r-project.org/doc/manuals/r-release/R-lang.html#Promise-objects


Thank you,


Evan
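
For question (1), there is no fully reliable test, but a rough heuristic
is possible at the R level: walk a closure's body looking for calls that
commonly signal non-standard evaluation. A sketch (illustrative only; it
misses primitives, C-level NSE, indirect uses, and trips over missing
arguments such as in x[, 1]):

```r
uses_nse <- function(f) {
  nse_calls <- c("substitute", "quote", "bquote", "match.call",
                 "sys.call", "deparse", "eval")
  found <- FALSE
  walk <- function(e) {
    if (is.call(e)) {
      fn <- e[[1]]
      if (is.name(fn) && as.character(fn) %in% nse_calls) found <<- TRUE
      for (a in as.list(e)[-1]) walk(a)   # recurse into the arguments
    }
  }
  walk(body(f))
  found
}

uses_nse(function(x) substitute(x))  # TRUE
uses_nse(function(x) x + 1)          # FALSE
```

For question (2), base R offers little; inspecting an unevaluated
promise without forcing it is possible at the C level (PRVALUE /
PRCODE), which is the sort of thing contributed packages do via small C
helpers.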


[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R CMD CHECK doesn't run configure when testing install?

2011-07-29 Thread Alexander James Rickett
I'm trying to get ready to submit a package to CRAN, but in order for the 
package to install on OS X, I need to temporarily set an environment variable.  
I put this in the 'configure' script, and 'R CMD INSTALL MyPackage' works fine, 
but when I do 'R CMD CHECK MyPackage', and it tests installation, the configure 
script doesn't run and consequently the installation fails.  Should I be 
setting the variable another way?  It passes all the other checks, and it will 
install outside of check, so could I just submit it as is?

Thanks!
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Nested tracing with custom callback

2016-07-13 Thread Evan James Patterson

Hi all,

I would like to install a trace function that gets executed whenever *any* R 
function is called. In Python, for example, this functionality is provided by 
the `sys.settrace` function.

I am not aware of any public interface, at the R or C level, that can 
accomplish this. The `trace` function is inadequate because it does not support 
nested functions. The `Rprof` function provides only statistical profiling.

Any advice would be appreciated. I'm not afraid to dig into R's internals if 
that's what it takes.

Thanks,
Evan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Nested tracing with custom callback

2016-07-13 Thread Evan James Patterson
Hi Jeroen,

That was exactly what I was looking for. Thanks!

Evan


From: jeroeno...@gmail.com  on behalf of Jeroen Ooms 

Sent: Wednesday, July 13, 2016 4:04 AM
To: Evan James Patterson
Cc: r-devel@r-project.org
Subject: Re: [Rd] Nested tracing with custom callback
    
On Wed, Jul 13, 2016 at 5:20 AM, Evan James Patterson
 wrote:
>
> I would like to install a trace function that gets executed whenever *any* R 
> function is called. In Python, for example, this functionality is provided by 
> the `sys.settrace` function.

Maybe you can adapt from the covr package:
https://github.com/jimhester/covr/blob/master/vignettes/how_it_works.Rmd
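
For readers landing here, covr's approach amounts to rewriting function
bodies so that every call is recorded. A stripped-down illustration of
the idea, wrapping every function in an environment (the names are
invented, and this is not covr's actual API):

```r
instrument_env <- function(env, log) {
  for (nm in ls(env)) {
    f <- get(nm, envir = env)
    if (!is.function(f)) next
    local({
      orig <- f; fname <- nm          # snapshot this iteration's bindings
      wrapped <- function(...) {
        log(fname)                    # record the call...
        orig(...)                     # ...then delegate to the original
      }
      assign(fname, wrapped, envir = env)
    })
  }
}

calls <- character()
e <- new.env()
e$square <- function(x) x^2
instrument_env(e, function(nm) calls <<- c(calls, nm))
e$square(3)   # returns 9; 'calls' now contains "square"
```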




__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Process to Incorporate Functions from {parallelly} into base R's {parallel} package

2020-11-06 Thread Balamuta, James Joseph
Hi all,

Henrik Bengtsson has done some fantastic work with {future} and, more 
importantly, greatly improved constructing and deconstructing a parallelized 
environment within R. It was with great joy that I saw Henrik slowly split off 
some functionality of {future} into the {parallelly} package. Reading over the 
package’s README, he states:

> The functions and features added to this package are written to be backward 
> compatible with the parallel package, such that they may be incorporated 
> there later.
> The parallelly package comes with an open invitation for the R Core Team to 
> adopt all or parts of its code into the parallel package.

https://github.com/HenrikBengtsson/parallelly

I’m wondering what the appropriate process would be to slowly merge some 
functions from {parallelly} into the base R {parallel} package. Should this be 
done with targeted issues on Bugzilla for different fields Henrik has 
identified? Or would an omnibus patch bringing in all suggested modifications 
be preferred? Or is it best to discuss via the list-serv appropriate 
contributions?

Best,

JJB

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Preferred way to include internal data in package?

2014-08-04 Thread Keirstead, James E
Hi,

I’m developing a package and would like to include some data sets for internal 
use only, e.g. configuration parameters for functions.  What is the preferred 
way of doing this?  If I put them in data/, then R CMD check asks me to 
document them, but I’d prefer it if they floated beneath the surface, without 
the user’s awareness.

Many thanks,
James
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Preferred way to include internal data in package?

2014-08-04 Thread Keirstead, James E
I saw that, but actually I was wondering if there was a more general method.  
I’d like to use plain text files if I can, instead of Rda files, since they’re 
easier to maintain (and it’s a small file).

On 4 Aug 2014, at 16:30, Jeroen Ooms  wrote:

>> I’m developing a package and would like to include some data sets for 
>> internal use only, e.g. configuration parameters for functions.  What is the 
>> preferred way of doing this?  If I put them in data/, then R CMD check asks 
>> me to document them but I’d prefer it if they floated beneath the surface, 
>> without the users awareness.
> 
> Perhaps in sysdata.rda. See "Writing R Extensions".
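
For a plain-text alternative to sysdata.rda (hedged; the file and
package names below are invented), the file can live under inst/extdata/
and be read on demand:

```r
## inst/extdata/params.csv is installed with the package; read it lazily:
read_params <- function() {
  path <- system.file("extdata", "params.csv", package = "mypkg")
  utils::read.csv(path, stringsAsFactors = FALSE)
}
```

The result can be cached in a package-local environment if re-reading
the file becomes a cost.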

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R from SVN fails to build on win32

2007-02-06 Thread James W. MacDonald
Hi Professor Ripley,

Prof Brian Ripley wrote:
> On Tue, 6 Feb 2007, Prof Brian Ripley wrote:
> 
>> You have not told us which version of R you are trying to build.
>> But this looks like a problem with your bintools, as nothing has 
>> changed in that line for a long time.
>>
>> I am using binutils-2.17.50-20060824.  There was an update at the 
>> weekend, and I suspect that is broken because it says
>>
>>In addition it patches windres to allow use of spaces in filenames,
>>
>> Does altering the Makefile to have -I../include work?
> 
> 
> I am also having no problems with
> 
> [d:/R/svn/trunk/src/gnuwin32]% windres --version
> GNU windres 2.17.50 20070129

You were correct. I updated to binutils-2.17.50-20060824 and the 
compilation is now proceeding without error.

Thank you for the help!

Best,

Jim


> 
>>
>>
>> On Tue, 6 Feb 2007, James MacDonald wrote:
>>
>>> I get the following error when building R from the subversion server as
>>> well as the latest tarball. I am on Windows XP, and I recently updated
>>> my MinGW installation. It's quite possible I am doing something wrong,
>>> but I am not sure what that might be.
>>>
>>> making console.d from console.c
>>> making dataentry.d from dataentry.c
>>> making dynload.d from dynload.c
>>> making edit.d from edit.c
>>> making editor.d from editor.c
>>> making embeddedR.d from embeddedR.c
>>> making extra.d from extra.c
>>> making opt.d from opt.c
>>> making pager.d from pager.c
>>> making preferences.d from preferences.c
>>> making psignal.d from psignal.c
>>> making rhome.d from rhome.c
>>> making rui.d from rui.c
>>> making run.d from run.c
>>> making shext.d from shext.c
>>> making sys-win32.d from sys-win32.c
>>> making system.d from system.c
>>> making dos_glob.d from dos_glob.c
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c console.c -o console.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c dataentry.c -o dataentry.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c dynload.c -o dynload.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c edit.c -o edit.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c editor.c -o editor.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c embeddedR.c -o embeddedR.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD -DLEA_MALLOC -c extra.c -o extra.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c opt.c -o opt.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c pager.c -o pager.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c preferences.c -o preferences.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c psignal.c -o psignal.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c rhome.c -o rhome.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c rui.c -o rui.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c run.c -o run.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c shext.c -o shext.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c sys-win32.c -o sys-win32.o
>>> sys-win32.c: In function `do_system':
>>> sys-win32.c:183: warning: `hERR' might be used uninitialized in this function
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c system.c -o system.o
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c dos_glob.c -o dos_glob.o
>>> gcc -c -o e_pow.o e_pow.S
>>> gcc  -O3 -Wall -pedantic -std=gnu99 -I../include -I. -DHAVE_CONFIG_H
>>> -DR_DLL_BUILD  -c malloc.c -o malloc.o
>>> windres  -I ../include -i dllversion.rc -o dllversion.o
>>> c:\MinGW\bin\windres.exe: unknown format type `../include

Re: [Rd] xlsReadWrite Pro and embedding objects and files in Excel worksheets

2007-02-08 Thread James W. MacDonald
Have you looked at RDCOMClient? I would imagine you could do what you 
want with this package.

http://www.omegahat.org/RDCOMClient/

Best,

Jim
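
For orientation, driving Excel over COM from R looks roughly like the
sketch below. It is written from memory of RDCOMClient's documented
examples; it is Windows-only, and the property and method names should
be checked against the current package before relying on them:

```r
library(RDCOMClient)
xl <- COMCreate("Excel.Application")
book <- xl[["Workbooks"]]$Add()
sheet <- book$Worksheets(1)
cell <- sheet$Range("A1")
cell[["Value"]] <- "written from R"   # write a value into the sheet
book$SaveAs("C:\\temp\\demo.xls")     # path is illustrative
xl$Quit()
```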

Hin-Tak Leung wrote:
> I don't know of any native xls read/write facility in R, either
> in core or as add-ons (I could be wrong), but if you want some source
> code to scavenge to build an R package out of, there are two
> perl modules, Spreadsheet::ParseExcel and Spreadsheet::WriteExcel,
> which are small enough to "read from front cover to back cover",
> so to speak; they might be useful as a reference and a source of code.
> 
> The other open-source packages which can read/write Excel files
> are gnumeric and openoffice, but their source code is probably too
> big to find one's way around in :-).
> 
> Good luck.
> 
> HTL
> 
> Mark W Kimpel wrote:
> 
>>Hans-Peter and other R developers,
>>
>>How are you? Have you made any progress with embedding URLs in Excel?
>>
>>Well, I have been busy thinking of more things for you to do;)
>>
>>My colleagues in the lab are not R literate, and some are barely 
>>computer literate, so I give them everything in Excel workbooks. I have 
>>gradually evolved a system such that these workbooks have become 
>>compendia of my data, output, and methods. That, in fact, is why I 
>>bought the Pro version of xlsReadWritePro. I have been saving graphics 
>>as PDF files, then inserting them as objects in Excel sheets.
>>
>>What I would like to be able to do is to embed objects (files) in sheets 
>>of a workbook directly from within R. I would also like to be able to 
>>save my current R workspace as an object embedded in a sheet so that in 
>>the future, if packages change, I could go back and recreate the 
>>analysis. I do not need to be able to manipulate files that R has not 
>>created, like a PDF file from another user. I would, however, like to be 
>>able to save my graphics as PDF files inside a worksheet, even if it 
>>meant creating a temp file or something.
>>
>>Before people begin talking about how MySQL or some other database could 
>>handle all that archiving, let me say that that is not what my 
>>colleagues want. They want a nice Excel file that they can take home on 
>>there laptops. One thing I like about worksheets is that they themselves 
>>can contain many embedded files, so it keeps our virtual desks neater 
>>and less confusing.
>>
>>Hans, if you could do this, it would be of tremendous benefit to me and 
>>hopefully a lot of people. R developers tend to think that all 
>>scientists are running Linux on 64-bit computers, but most biomedical 
>>researches still store date in Excel files. This won't solve everybody's 
>>needs, but it could be a start.
>>
>>Well, let me know what you think. I am cc'ing R-devel to see if any of 
>>those guys have ideas as well.
>>
>>Thanks,
>>Mark
>>
>>
>>
> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel


-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623


**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] rm error on Windows after R CMD INSTALL

2007-08-02 Thread James W. MacDonald
  Hi,

I seem to have buggered my windows box in such a way that R CMD INSTALL 
no longer works, and I can't figure out what is wrong (and Google 
doesn't seem to be my friend either...). When installing a package 
(doesn't matter what package), I get the following error.

installing to 'c:/R-2.5.1/library'


-- Making package hthgu133aprobe 
   adding build stamp to DESCRIPTION
   installing R files
   installing data files
rm: failed to get attributes of `/': No such file or directory
make[2]: *** [c:/R-2.5.1/library/hthgu133aprobe/data] Error 1
make[1]: *** [all] Error 2
make: *** [pkg-hthgu133aprobe] Error 2
*** Installation of hthgu133aprobe failed ***

Removing 'c:/R-2.5.1/library/hthgu133aprobe'


I'm using the R tools installed via the Rtools.exe installer, have the 
Path variable set up correctly, and I don't have Cygwin installed.

Any suggestions?

Best,

Jim


-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



Re: [Rd] suggesting \alias* for Rd files (in particular for S4 method documentation)

2007-08-30 Thread James W. MacDonald
I think Oleg makes a good point here, and I don't see how his suggestion 
would hide any documentation.

As an example, start R and then open the HTML help page, and go to the 
Category package. If you click on any one of

annotation,GOHyperGParams-method
categoryName,GOHyperGParams-method
conditional,GOHyperGParams-method
conditional<-,GOHyperGParams-method
GOHyperGParams-class
ontology,GOHyperGParams-method
ontology<-,GOHyperGParams-method
show,GOHyperGParams-method

You will be sent to the same help page, which contains the documentation 
for all those specific methods. The question here is: do we really need 
this many-to-one relationship in the HTML pages?

In general (Oleg notwithstanding), I think the HTML pages are used 
primarily by users new to R, and having such an overload on the index 
page for this package is IMO a disservice to these people. Having just 
one link, GOHyperGParams-class, or possibly an additional 
GOHyperGParams-methods would be much cleaner.

There already exists a mechanism for keeping internal methods from 
showing up in the HTML indices: adding \keyword{internal} to the end of 
the .Rd file. However, this hides all the \alias{} (and \name{}) 
entries, so won't do what Oleg wants unless you have two separate .Rd 
files, one containing the \alias{} names you want to show, and the other 
with the 'internal' keyword.
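
Concretely, the two-file workaround would look something like this — a sketch only, with invented file names, built on the write.image example from Oleg's message:

```
% write.image.Rd -- the page users should find from the HTML index
\name{write.image}
\alias{write.image}
\title{Write an image to disk}

% write.image-internal.Rd -- method aliases hidden from the index
\name{write.image-internal}
\alias{write.image,Image,missing-method}
\alias{write.image,Image,character-method}
\keyword{internal}
```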

Best,

Jim



Martin Morgan wrote:
> Hi Oleg --
> 
> On the usefulness of write.image,Image,character-method, in the end I
> really want documentation on specific methods. Maybe the issue is one
> of presentation?
> 
> write.image
> Image,character-method
> Image,missing-method
> 
> or, in a little more dynamic world, a '+' in front of write.image to
> expand the methods list.
> 
> alias* is a little strange, because it implies you're writing
> documentation, but then hiding easy access to it! This is not a strong
> argument against introducing alias*, since no one is forced to use it.
> 
> It also suggests that your documentation is organized by generic,
> which might also be a bit unusual -- I typically have an object (e.g.,
> an Image) and wonder what can be done to it (e.g., write it to
> disk). This suggests associating method documentation with object
> documentation. Multiple dispatch might sometimes make this difficult
> (though rarely in practice?). Separately documenting the generic is
> also important.
> 
> Martin
> 
> Oleg Sklyar <[EMAIL PROTECTED]> writes:
> 
>> Hi,
>>
>> I do not know if everybody finds index pages of the html-formatted R
>> help useful, but I know I am at least not the only one who uses them
>> extensively to get the overview of functions and methods in a package
>> (even for my own package). Problems arise, however, if a lot of S4
>> methods need to be documented blowing out the index with (generally
>> irrelevant) entries like:
>>
>> write.image,Image,missing-method
>> write.image,Image,character-method
>>
>> instead of a simple "write.image". I also do not believe anyone really
>> does something like "help(write.image,Image,missing-method)" on the
>> command line, thus these structures are more for internal linking than
>> for users.
>>
>> Therefore, I would suggest to introduce a modification of the \alias
>> keyword, that would do all the same as the standard \alias keyword, yet
>> it would *hide* that particular entry from the index. Reasonable
>> construction could be something like \alias*{} yielding
>>
>> \alias{write.image}
>> \alias*{write.image,Image,missing-method}
>> \alias*{write.image,Image,character-method}
>>
>> Alternatively:
>>
>> \alias{write.image}
>> \alias[hide]{write.image,Image,missing-method}
>> \alias[hide]{write.image,Image,character-method}
>>
>> Any comments?
>>
>> For me, the current way around is to avoid usage sections with \S4method
>> all together, substituting them with pairs of
>>
>> \section{Usage}{\preformatted{
>> }}
>> \section{Arguments}{
>> }
>>
>> and putting all aliases marked above with * into internals, which is
>> definitely not the best way of going around documentation and
>> code/documentation mismatches.
>>
>> Best regards,
>> Oleg
>>
>> -- 
>> Dr. Oleg Sklyar * EBI-EMBL, Cambridge CB10 1SD, UK * +44-1223-464466
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



Re: [Rd] Best practices - R CMD check and vignettes

2007-09-19 Thread James W. MacDonald
Thanks, Max. Right after sending that email I was frantically searching 
for the 'Retract' button on my mailer, but evidently there isn't one ;-D.

Anyway, I'm not sure the entire package is installed during either R CMD 
build or R CMD check, so that approach may not work. My ill-advised 
idea of hard-coding the path won't work either, since the path will be 
different for these two steps.

However, the obvious fix that occurred to me 1 nanosecond after hitting 
'send' was that the saved objects should just be in the /inst/doc 
directory with the .Rnw files instead of being in some other directory. 
Originally I had the raw data in the /examples directory so users of my 
package could practice using those data, but really I think the only use 
is for building the vignettes, so moving to /inst/doc appears the most 
reasonable course of action.
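
With the saved objects moved to /inst/doc, the .Rnw files and the data share a directory when the vignettes are built, so a bare relative path suffices — a sketch, with an invented chunk label and the file name from the earlier example:

```
<<loaddata, echo=FALSE>>=
## The .Rnw file and somedata.Rdata now both live in inst/doc, so a
## plain relative path works when Sweave runs during build and check:
load("somedata.Rdata")
@
```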

Best,

Jim



Kuhn, Max wrote:
> Jim,
> 
> But I'm not sure if this works if the package has not been installed (or
> it might find a version already installed in your library path), but
> give it a try:
> 
>load(system.file("examples", "somedata.Rdata", package = "PkgName"))
> 
> Max 
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of James MacDonald
> Sent: Monday, September 17, 2007 11:07 PM
> To: [EMAIL PROTECTED]
> Subject: [Rd] Best practices - R CMD check and vignettes
> 
> Hi,
> 
> I have a package that contains two vignettes that both use saved objects
> 
> in the examples directory of the package. With previous versions of R I 
> could have a code chunk in the vignette like this:
> 
> <>=
> load("../examples/somedata.Rdata")
> @
> 
> followed by a code chunk like
> 
> <>=
> foo <- bar("data")
> @
> 
> that simulated the actual reading in of the data (I use a saved object 
> to limit package size).
> 
> This passed check with previous versions of R, but under R-2.6.0alpha, 
> the vignettes are dumped into an inst directory in the .Rcheck 
> directory, where Sweave() and texi2dvi() are run. Unfortunately, the 
> above code chunks no longer work.
> 
> I can certainly hard code the first chunk to find the .Rdata file, but I
> 
> have to imagine there is a much more elegant way to do this.
> 
> Any suggestions?
> 
> Best,
> 
> Jim
> 
> 

-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



Re: [Rd] Package Building and Name Space

2008-01-23 Thread James W. MacDonald


Duncan Murdoch wrote:
> On 1/23/2008 11:11 AM, Johannes Graumann wrote:
>> ... sorry for reposting this in a more appropriate forum than r.general ...
>>
>> Hello,
>>
>> I just don't get this and would appreciate if someone could write a line or
>> two: I'm trying to build this package and it stops installing after I add
>> the following to the NAMESPACES file:
>>
>>> importFrom(gsubfn,strapply)
>> The error during the package test is:
>>
>> Error in MyPackage::MyFunction :
>>   package 'MyPackage' has no name space and is not on the search path
>> Calls:  ...  -> switch -> sys.source -> eval ->
>> eval -> ::
>> Execution halted
>>
>> 'MyFunction' contains 'strapply' from gsubfn.
>>
>> Please tell me where I err.
> 
> The file is called NAMESPACE, not NAMESPACES.

Plus gsubfn doesn't have a NAMESPACE, so I don't think this will work 
anyway.
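
For reference, the file Johannes wants would look something like this — a sketch, where MyFunction is his placeholder name and, as noted above, importFrom() only works if the imported package itself has a name space:

```
# File: NAMESPACE  (note the exact spelling)
importFrom(gsubfn, strapply)
export(MyFunction)
```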

Best,

Jim


-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623



[Rd] R CMD build question

2006-09-22 Thread James W. MacDonald
On Solaris when my package is built, I get the following result:

creating vignettes ...Segmentation Fault - core dumped
OK

My question isn't why I get a segfault, but why does build return an OK 
after such an inauspicious event? Is build only supposed to error out if 
something more central to the package is off?

R CMD check does issue a warning that the vignette is missing, so maybe 
that is the intended result.

Best,

Jim




-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623


**
Electronic Mail is not secure, may not be read every day, and should not be 
used for urgent or sensitive issues.



Re: [Rd] Build error/zlib question

2006-09-29 Thread James W. MacDonald
Prof Brian Ripley wrote:
> On Fri, 29 Sep 2006, Uwe Ligges wrote:
> 
>> 
>> 
>> James MacDonald wrote:
>> 
>>> Hi,
>>> 
>>> I am unable to build a package I maintain using a relatively
>>> current build of R-2.4.0 alpha, whereas the package builds just
>>> fine on R-2.3.1. Both versions of R were built from source. I'm
>>> hoping a guRu might be able to give some help.
>> 
>> 
>> The entry points are no longer exported, you probably have to
>> specify the zlib header files (which reside in R/src/extra/zlib).
> 
> 
> I think Uwe meant to say '-L$(RHOME)/src/extra/zlib -lz' is needed in
>  PKG_LIBS.

Thank you for the hint. This does allow the package to build.
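
In package terms, the hint amounts to something like the following in src/Makevars.win — a sketch; whether the variable is spelled RHOME or R_HOME depends on the Windows build files in use:

```
# src/Makevars.win -- compile and link against the zlib in the R sources
PKG_CPPFLAGS = -I$(RHOME)/src/extra/zlib -DHAVE_ZLIB
PKG_LIBS = -L$(RHOME)/src/extra/zlib -lz
```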

> 
> Note that this package would not build in an installed version of R,
> so it did not 'builds just fine on R-2.3.1' as claimed.

I made no claim for this package building on an installed version; if 
you read the sentence right after the one you quote, you will notice 
that I state both versions of R were compiled from source.

> 
> However, this is all against the advice of 'Writing R Extensions',
> and the change is documented in the CHANGES file.  What you should
> really do is to supply your own zlib compiled to your own
> requirements.  If you make use of private entry points, expect
> self-imposed trouble!

Thanks for the advice. When I get the chance I will look into supplying 
such an animal with the package.

Best,

Jim


> 
>> 
>> Uwe Ligges
>> 
>> 
>>> Some snippets from the build process:
>>> 
>>> R-2.3.1
>>> 
>>> making DLL ... gcc  -Ic:/R-2.3.1/src/extra/zlib -DHAVE_ZLIB
>>> -Ic:/R-2.3.1/include -Wall -O2   -c read_cdffile.c -o
>>> read_cdffile.o read_cdffile.c: In function `readQC': 
>>> read_cdffile.c:565: warning: unused variable `param_unit' windres
>>> --include-dir c:/R-2.3.1/include  -i makecdfenv_res.rc -o 
>>> makecdfenv_res.o gcc  -shared -s  -o makecdfenv.dll
>>> makecdfenv.def read_cdffile.o makecdfenv_res.o  -Lc:/R-2.3.1/bin
>>> -lR ... DLL made
>>> 
>>> R-2.4.0 beta
>>> 
>>> making DLL ... gcc  -Ic:/rw2040dev/src/extra/zlib -DHAVE_ZLIB
>>> -Ic:/rw2040dev/include -Wall -O2 -std=gnu99   -c read_cdffile.c
>>> -o read_cdffile.o read_cdffile.c: In function `readQC': 
>>> read_cdffile.c:565: warning: unused variable `param_unit' windres
>>> --include-dir c:/rw2040dev/include  -i makecdfenv_res.rc -o 
>>> makecdfenv_res.o gcc  -shared -s  -o makecdfenv.dll
>>> makecdfenv.def read_cdffile.o makecdfenv_res.o
>>> -Lc:/rw2040dev/bin   -lR 
>>> read_cdffile.o:read_cdffile.c:(.text+0x42): undefined reference
>>> to `gzgets' read_cdffile.o:read_cdffile.c:(.text+0xf3): undefined
>>> reference to `gzopen' 
>>> read_cdffile.o:read_cdffile.c:(.text+0x10f): undefined reference
>>> to `gzgets' read_cdffile.o:read_cdffile.c:(.text+0x140):
>>> undefined reference to `gzrewind' 
>>> read_cdffile.o:read_cdffile.c:(.text+0x177): undefined reference
>>> to `gzclose' collect2: ld returned 1 exit status make[3]: ***
>>> [makecdfenv.dll] Error 1 make[2]: *** [srcDynlib] Error 2 
>>> make[1]: *** [all] Error 2 make: *** [pkg-makecdfenv] Error 2
>>> 
>>>> version
>>> 
>>> _
>>> platform       i386-pc-mingw32
>>> arch           i386
>>> os             mingw32
>>> system         i386, mingw32
>>> status         alpha
>>> major          2
>>> minor          4.0
>>> year           2006
>>> month          09
>>> day            10
>>> svn rev        39242
>>> language       R
>>> version.string R version 2.4.0 alpha (2006-09-10 r39242)
>>> 
>>> TIA,
>>> 
>>> Jim
>>> 
>>> James W. MacDonald, M.S. Biostatistician Affymetrix and cDNA
>>> Microarray Core University of Michigan Cancer Center 1500 E.
>>> Medical Center Drive 7410 CCGC Ann Arbor MI 48109 734-647-5623
>>> 
>>> 
>>> ** 
>>> Electronic Mail is not secure, may not be read every day, and
>>> should not be used for urgent or sensitive issues.
>>> 
>>> __ 
>>> R-devel@r-project.org mailing list 
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
>> 
>> __ 
>> R-devel@r-project.org mailing list 
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
> 


-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623





Re: [Rd] x86_64, acml-3.5.0-gfortran64 and lme4

2006-10-16 Thread James W. MacDonald
I am not encountering segfaults either, using earlier versions of 
gcc/gfortran64 (4.0.0) and acml (3.0.0) on a 64-bit build of R-2.4.0 on 
Fedora Core 4.

Best,

Jim



McGehee, Robert wrote:
> I am not encountering segfaults on a 64-bit build of R 2.4.0 compiled
> with gcc and g77 3.4.5 and ATLAS 3.6.0 on a Red Hat Athlon64 system.
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Douglas Bates
> Sent: Monday, October 16, 2006 10:26 AM
> To: R Development Mailing List
> Subject: [Rd] x86_64, acml-3.5.0-gfortran64 and lme4
> 
> I am encountering segfaults when checking the lme4 package on an
> Athlon64 system if I use the acml blas.  R was built as a 64-bit
> application using the GCC 4.0.3 compiler suite including gfortran.
> The version of acml is 3.5.0 gfortran64.
> 
> I do not encounter the segfaults when I compile R with R's built-in
> BLAS.  The errors occur in the first example in lme4 in a call to
> lmer. It looks like they would occur in any call to lmer.  Running
> under the debugger shows that the segfault occurs in a call to dtrsm
> (a level-3 BLAS routine to solve a triangular system of equations)
> that is called from within a cholmod (sparse matrix library) routine.
> 
> Has anyone succeeded in running R CMD check on the lme4 package with
> accelerated BLAS?  I'm trying to pin down whether this occurs only with
> ACML or also with Atlas and/or Goto's BLAS.
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 
> ______
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel


-- 
James W. MacDonald
Affymetrix and cDNA Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
734-647-5623





Re: [Rd] rm() deletes 'c' if c('a','b') is the argument (PR#9399)

2006-11-29 Thread James W. MacDonald
That's because you are not using rm() correctly. From the help page:

Arguments:

  ...: the objects to be removed, supplied individually and/or as a
   character vector

 list: a character vector naming objects to be removed.

So if you pass an unnamed argument, rm() will assume you have some 
objects in the .GlobalEnv with those names that you would like to 
remove. If you want to pass a character vector, you have to name it 
because the first argument is '...'.

 > a <- 1:5
 > b <- 2:10
 > c <- "not a good variable name"
 > rm(list=c("a","b"))
 > c
[1] "not a good variable name"

 > a <- 1:5
 > b <- 2:10
 > c <- "still not a good variable name"
 > rm(a,b)
 > c
[1] "still not a good variable name"

 > a <- 1:5
 > b <- 2:10
 > c <- "still not a good variable name"
 > rm("a","b")
 > c
[1] "still not a good variable name"


NB: 'c' is not a good variable name because you are masking an existing 
function.

An argument could be made that the explanation for the first argument is 
not very exact.

Best,

Jim



Steven McKinney wrote:
> Same behaviour seen on Apple Mac OSX 10.4.8 platform:
> 
> 
>>sessionInfo()
> 
> R version 2.4.0 Patched (2006-10-31 r39758) 
> powerpc-apple-darwin8.8.0 
> 
> locale:
> en_CA.UTF-8/en_CA.UTF-8/en_CA.UTF-8/C/en_CA.UTF-8/en_CA.UTF-8
> 
> attached base packages:
> [1] "methods"   "stats" "graphics"  "grDevices" "utils" "datasets"  
> "base" 
> 
> other attached packages:
> XML 
> "1.2-0" 
> 
>>ls()
> 
> [1] "getMonograph" "last.warning" "myfun"   
> 
>>a <- 1
>>b <- 2
>>c <- letters
>>a
> 
> [1] 1
> 
>>b
> 
> [1] 2
> 
>>c
> 
>  [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q" "r" 
> "s" "t" "u" "v" "w" "x" "y" "z"
> 
>>rm(c('a', 'b'))
>>a
> 
> Error: object "a" not found
> 
>>b
> 
> Error: object "b" not found
> 
>>c
> 
> .Primitive("c")
> 
>>ls()
> 
> [1] "getMonograph" "last.warning" "myfun"   
> 
>>a <- 1
>>b <- 2
>>d <- letters
>>ls()
> 
> [1] "a""b""d""getMonograph" 
> "last.warning" "myfun"   
> 
>>rm(c('a', 'b'))
> 
> Warning message:
> remove: variable "c" was not found 
> 
>>ls()
> 
> [1] "d""getMonograph" "last.warning" "myfun"   
> 
> 
> Steven McKinney
> 
> Statistician
> Molecular Oncology and Breast Cancer Program
> British Columbia Cancer Research Centre
> 
> email: [EMAIL PROTECTED]
> 
> tel: 604-675-8000 x7561
> 
> BCCRC
> Molecular Oncology
> 675 West 10th Ave, Floor 4
> Vancouver B.C. 
> V5Z 1L3
> Canada
> 
> 
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED] on behalf of [EMAIL PROTECTED]
> Sent: Wed 11/29/2006 10:35 AM
> To: r-devel@stat.math.ethz.ch
> Cc: [EMAIL PROTECTED]
> Subject: [Rd] rm() deletes 'c' if c('a','b') is the argument (PR#9399)
>  
> Full_Name: Lixin Han
> Version: 2.4.0
> OS: Windows 2000
> Submission from: (NULL) (155.94.110.222)
> 
> 
> A character vector c('a','b') is supplied to rm().  As a result, 'c' is 
> deleted
> unintentionally.
> 
> 
>>a <- 1:5
>>b <- 'abc'
>>c <- letters
>>ls()
> 
> [1] "a" "b" "c"
> 
>>rm(c('a','b'))
>>ls()
> 
> character(0)
> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel


-- 
James W. MacDonald, M.S.
Biostatistician
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
734-647-5623





Re: [Rd] [R] HTTP User-Agent header

2006-07-28 Thread James P. Howard, II
On 7/28/06, Seth Falcon <[EMAIL PROTECTED]> wrote:

> I have a rough draft patch, see below, that adds a User-Agent header
> to HTTP requests made in R via download.file.  If there is interest, I
> will polish it.

It looks right, but I am running under Windows without a compiler.

-- 
James P. Howard, II  -  [EMAIL PROTECTED]
http://jameshoward.us/  -  (443)-430-4050
