, for future coders' benefit?
S Ellison
***
This email and any attachments are confidential. Any use...{{dropped:8}}
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/ma
short term and hard to maintain in the long term.
So if you want to do something that will readily convert all combinations of
things like '12 w', '12W', '12wks', '3m 2d', 1wk 2d', '18d' etc, write that as
a stand-alone routine.
library version of a
function rather than (say) a 'source'd user-space version that is under
development.
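A stand-alone converter of that sort might look something like this. The function name, the recognised unit spellings, and the 30-day month are all my assumptions, not anything from the thread:

```r
## Hypothetical stand-alone converter: duration strings to days.
## Assumes 'm' means a 30-day month -- adjust to taste.
to_days <- function(s) {
  units <- c(d = 1, w = 7, wk = 7, wks = 7, m = 30)
  parts <- regmatches(s, gregexpr("[0-9]+ ?[A-Za-z]+", s))[[1]]
  num   <- as.numeric(sub("[^0-9]+$", "", parts))   # leading number
  unit  <- tolower(sub("^[0-9]+ ?", "", parts))     # trailing unit label
  sum(num * units[unit])
}
to_days("12 w")    # 84
to_days("3m 2d")   # 92
to_days("1wk 2d")  # 9
```

Kept as a separate routine, this can be tested directly and reused, exactly as suggested above.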
Being non-specific (ie omitting foo:::) means that test code would pick up the
development version in the current user environment by default. That's h
> When trying out some variations with `[.data.frame` I noticed some (to me)
> odd behaviour,
Not just in 'myfun' ...
plot(x=1:10, y=)
plot(x=1:10, y=, 10:1)
In both cases, 'y=' is ignored. In the first, the plot is for y=NULL (so not
'missing' y)
In the second case, 10:1 is positionally matched to y despite the intervening empty 'y='.
> > plot(x=1:10, y=)
> > plot(x=1:10, y=, 10:1)
> >
> > In both cases, 'y=' is ignored. In the first, the plot is for y=NULL (so not
> 'missing' y)
> > In the second case, 10:1 is positionally matched to y despite the
> > intervening
> 'missing' 'y='
> >
> > So it isn't just 'missing'; it's 'not present'.
> Yes, I think all of that is correct. But y _is_ missing in this sense:
> > plot(1:10, y=)
> > ...
> Browse[2]> missing(y)
Although I said what I meant by 'missing' vs 'not present', it wasn't exactly
what missing() means. My bad.
missing() returns TRUE if an argument is not specified in the call.
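The distinction can be seen in a minimal function of my own devising (not plot() itself):

```r
## What missing() reports for an empty 'y=' versus an explicit NULL:
f <- function(x, y) missing(y)
f(1:10)            # TRUE: y not supplied
f(1:10, y = )      # TRUE: the empty 'y=' is dropped by the argument matcher
f(1:10, y = NULL)  # FALSE: NULL is a supplied value, even though plot()
                   # treats y = NULL much like 'no y'
```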
FWIW, before all the examples are changed to data frame variants, I think
there's fairly good reason to have at least _one_ example that does _not_ place
variables in a data frame.
The data argument in lm() is optional. And there is more than one way to manage
data in a project. I personally d
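For instance, a minimal example (made-up data) that keeps the variables in the calling environment rather than a data frame:

```r
## 'data' in lm() is optional; variables are found via the
## formula's environment:
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)
fit <- lm(y ~ x)
coef(fit)   # intercept and slope
```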
> From: Thomas Yee [mailto:t@auckland.ac.nz]
>
> Thanks for the discussion. I do feel quite strongly that
> the variables should always be a part of a data frame.
This seems pretty much a decision for R core, and I think it's useful to have
raised the issue.
But I, er, feel strongly that
> As far as I can tell, the manual help page for ``sd``
>
> ?sd
>
> does not explicitly mention that the formula for the standard deviation is
> the so-called "Bessel-corrected" formula (divide by n-1 rather than n).
See Details, where it says
"Details:
Like 'var' this uses denominator n - 1."
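A quick numeric check (made-up data) of the two denominators:

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)
sd(x)                                 # Bessel-corrected: denominator n - 1
sqrt(sum((x - mean(x))^2) / (n - 1))  # identical value
sqrt(sum((x - mean(x))^2) / n)        # uncorrected divide-by-n form, smaller
```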
Please forgive any mis-post, and do feel free to point me to a more
appropriate list if this isn't properly R-dev.
I have a package on R-forge that shows correct linux and other *nix
builds, but no windows build. The log for the patched version shows the
error below, which appears to be due to a l
could only tell you what the R-forge log told me.
Steve Ellison
>>> Prof Brian Ripley 28/03/2011 05:32 >>>
On Mon, 28 Mar 2011, S Ellison wrote:
> Please forgive any mis-post, and do feel free to point me to a more
> appropriate list if this isn't properly R-dev
This seems trivially fixable using something like
median.data.frame <- function(x, na.rm = FALSE) {
    sapply(x, function(y, na.rm = FALSE)
           if (is.factor(y)) NA else median(y, na.rm = na.rm),
           na.rm = na.rm)
}
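A runnable check of that sketch, with the definition repeated so the snippet stands alone, applied to a made-up data frame:

```r
## Column-wise medians of a data frame; factors give NA.
median.data.frame <- function(x, na.rm = FALSE) {
    sapply(x, function(y, na.rm = FALSE)
           if (is.factor(y)) NA else median(y, na.rm = na.rm),
           na.rm = na.rm)
}
d <- data.frame(a = c(1, 3, 5), b = c(2, 4, 6),
                f = factor(c("x", "y", "x")))
median.data.frame(d)   # a = 3, b = 4, f = NA
```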
>>> Paul Johnson 28/04/2011 06:20 >>>
On Wed, Apr 27, 2011 at 12:44 PM, Patrick Burns wrote:
>
Further apologies to the list, but emails are still not getting to folk.
Duncan, you should have had a diff from me yesterday - if not, they've
fouled it up again...
assigning the list to a
named variable and you don't need the name 'foo'.
Whether that is inconsistent is something of a matter of perspective.
Simplification applied as far as possible will always depend on what
simplification is possible for the particular
life - particularly one that starts small but is
subsequently refactored in growth - there may be no code left that was
contributed by the original developer.
Is there a point at which the original developer should not stay on the author list?
> > Is there a point at which the original developer should not stay on the
> > author
> list?
>
> Authorship is not just about code. For example, there are functions in R
> which
> have been completely recoded, but the design and documentation remain.
> Copyright can apply to designs and there
they can - and should - contact the maintainer and say so.
So ask before removing someone from your citation. If they say 'no', don’t
remove them.
S Ellison
base::rowSums, the 'value' section of the help page says
" A numeric or complex array of suitable size, or a vector if the
result is one-dimensional. For the first four functions the
'dimnames' (or 'names' for a vector result) are taken from the original array."
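So, for example, the row names survive as the 'names' of the vector result:

```r
## rowSums() keeps the dimnames of the input as names of the result:
m <- matrix(1:6, nrow = 2, dimnames = list(c("r1", "r2"), NULL))
rowSums(m)   # named numeric vector: r1 = 9, r2 = 12
```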
without messing around with new
functions that hide what you're doing.
Do you really have a case where 'test' is neither a single logical (that could
be used with 'if') nor a vector that can be readily replicated to the desired
length with 'rep'?
If not, I
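A sketch of the rep() route, with a short made-up condition vector (the negation step is just an illustration, not anything from the thread):

```r
## Recycling a short 'test' with rep() instead of reaching for ifelse():
x <- 1:6
test <- c(TRUE, FALSE, TRUE)                 # hypothetical short condition
sel <- rep(test, length.out = length(x))     # TRUE FALSE TRUE TRUE FALSE TRUE
out <- x
out[!sel] <- -x[!sel]                        # negate where the test is FALSE
out                                          # 1 -2 3 4 -5 6
```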
e class.
>
> I agree... (and typically it does "own" the class)
If that is true and a good general guide, is it worth adding something to that
effect to 1.5.2 of "Writing R extensions"?
At present, nothing in 1.5.2 requires or recommends that
hat as sensible.
But that's a personal opinion. If these really are serious issues, somebody
needs to work up a consistent policy for R projects; otherwise we'll all be
walking on eggshells.
S Ellison
e with ideal choice of prior weights the scaled residuals are expected to
be IID Normal (under the normality assumption for a linear model) and without
scaling they aren't IID, so a Q-Q plot would be meaningless without scaling.
S Ellison
Transferred from R-help:
>> From: S Ellison
>> Subsetting using subset() is perhaps the most natural way of
>> subsetting data frames; perhaps a line or two and an example could
>> usefully be included in the 'Working with data frames' section of the
S lastname' form and that
removes the above Note*, but I then get a Note warning of maintainer change.
Is either Note going to get in the way of CRAN submission? (And if one of them
will, which one?)
S Ellison
*A minor aside: I couldn't find any documented reason for that, or indeed
version dependencies in
the imports or dependency lists is a good idea that individual package
maintainers could relatively easily manage, but I think freezing CRAN as a
whole or adopting single release cycles for CRAN would b
> This seems to be a common approach in other packages. However, one of my
> testers noted that if he put formula=y~. then w, ID, and site showed up in the
> model where they weren't supposed to be.
This is the documented behaviour for '.' in a formula - it means 'everything
else in the data object'.
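A small demonstration with made-up columns: '.' pulls in everything else, including helper columns, unless they are removed from the formula (or from the data):

```r
## '.' expands to all other columns of 'data', so ID sneaks in:
set.seed(42)
d <- data.frame(y = rnorm(10), x1 = rnorm(10), x2 = rnorm(10), ID = 1:10)
names(coef(lm(y ~ ., data = d)))        # includes ID
names(coef(lm(y ~ . - ID, data = d)))   # ID explicitly excluded
```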
length 0 object was not considered.
I'd suggest logging it as an issue for R-core to at least look at, and either fix or at least warn about in the documentation.
S Ellison
ge? The checks change over
time and with R version.
Finally, have you checked the log on R forge to see whether the check is
returning the same warnings/errors that you see locally?
S Ellison
> My understanding is that rationality is not the case in
> Europe - see e.g. http://en.wikipedia.org/wiki/Database_Directive.
I know we don't always see England as part of Europe, but 'ouch' anyway...
This is not copyright law. It is protection of databases, and that is a
different set of legislation.
> Is it possible to keep from triggering the following warning
> when I check the package?
>
> summary:
> function(object, ...)
> summary.agriculture:
> function(x, analyte.names, results.col, analyte.col, by, det.col,
> [clip]
Part of the solution is to add ... to the legacy function; that
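A sketch of the fix (the class and argument names here are made up, standing in for the package's own): the method's first argument should match the generic's, and '...' should be accepted so the signatures are compatible:

```r
## S3 method whose signature is compatible with summary(object, ...):
summary.agriculture <- function(object, analyte.names = NULL, ...) {
    cat("agriculture summary:", length(analyte.names), "analyte(s)\n")
    invisible(object)
}
obj <- structure(list(), class = "agriculture")
s <- summary(obj)   # dispatches cleanly
```

With the first argument renamed to 'object' and '...' present, R CMD check no longer flags a generic/method argument mismatch.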
n/archived,
> sometimes 10s at a time.
... from which we deduce* that even the combination of the world's best
statisticians and a system entirely under their control does not guarantee an
unambiguous count.
Anyone out there still think statistics are easy?
Even so, 4000 plus or minus a
http://cran.r-project.org/web/packages/policies.html for FTP upload?
S Ellison
d; the second attempt
indicates that I had misunderstood the error message. It wasn't indicating a
clash with an existing package name but a clash with an existing version
number. Probable explanation: I have failed to update my DESCRIPTION file.
Apologies for wasting folks' time. Again.
S Ellison
>
>
>
> 1. If I develop and distribute an R package which depends on another package
> that is released under the GPL, I have to release my package in a
> GPL-compatible way.
It is probably worth remembering that declaring a 'dependency' of package foo
on package bar in the R inter-package s
>> Some of us (including me) have strongly argued on several
>> occasions that global options() settings should *not* have an effect
>> on anything "computing" ...
> ...
Global options are less of a problem where a function allows them to be
overridden by the user or programmer. If something is
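A sketch of that pattern (the function name is mine): read the global option only as a default, and let an explicit argument win:

```r
## Option as a default, overridable by the caller:
fmt_num <- function(x, digits = getOption("digits")) signif(x, digits)
fmt_num(pi)      # falls back to the global option
fmt_num(pi, 3)   # 3.14 -- the explicit argument wins
```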
>c() should have been put on the deprecated list a couple
>of decades ago
Don't you dare!
>Back to reality
phew! had me worried there.
c() is no problem at all for lists, Dates and most simple vector types;
why deprecate something solely because it doesn't behave for something
it doesn't claim to handle?
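For instance, c() behaves perfectly well for lists and Dates:

```r
## c() on lists concatenates elements; on Dates it keeps the class:
c(list(1, "a"), list(TRUE))                          # a length-3 list
d <- c(as.Date("2024-01-01"), as.Date("2024-01-03"))
class(d)                                             # still "Date"
diff(d)                                              # 2 days
```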
I was working on a permutation-like variant of the bootstrap for smaller
samples, and wanted to be able to get summary stats of my estimator
conveniently. mean() is OK as it's a generic, so a mean.oddboot function gets
used automatically. But var, sd and others are not originally written as
generics.
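A sketch of the dispatch that does work ('oddboot' and its internal structure are invented here for illustration):

```r
## A class-specific mean() method is picked up automatically because
## mean() is generic; sd() is not, so no sd.oddboot dispatch is possible.
mean.oddboot <- function(x, ...) mean(x$estimates, ...)
boot <- structure(list(estimates = c(1, 2, 3, 10)), class = "oddboot")
mean(boot)   # 4
```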
Brian,
>If we make functions generic, we rely on package writers implementing the
>documented
>semantics (and that is not easy to check). That was deemed to be too
>easy to get wrong for var().
Hard to argue with a considered decision, but the alternative facing increasing
numbers of package
Boxplot and bxp seem to have changed behaviour a bit of late (R 2.4.1). Or
maybe I am mis-remembering.
An annoying feature is that while at=3:6 will work, there is no way of
overriding the default xlim of 0.5 to n+0.5. That prevents plotting boxes on,
for example, interval scales - a useful thing.
Full_Name: Steve Ellison
Version: 2.4.1
OS: Windows, Linux
Submission from: (NULL) (194.73.101.157)
bxp() allows specification of box locations with at=, but neither adjusts xlim=
to fit at nor does it respect xlim provided explicitly.
This is because bxp() now includes explicit xlim as c(0.5, n + 0.5).
What is mystifying is that the issue was not present in previous versions,
so appropriate code already existed.
However, I agree that there seem to be a couple of additional issues that
I had missed.
I am perfectly happy to look at this again myself, though, and provide
extended code; w
Brian,
> Note that ?bxp quite carefully says which graphical pars it does and does
> not accept, and 'xlim' is one it does not accept.
In my version at the time, bxp did not list which plot parameters it does not
accept. xlim was simply not mentioned at all. I can't easily see lack of a
mention
CO2 is apparently a groupedData object; the formula attribute is described by
Pinheiro and Bates as a 'display formula'.
Perhaps reference to the nlme package's groupedData help would be informative?
>>> "Gabor Grothendieck" <[EMAIL PROTECTED]> 16/07/2007 16:18:37 >>>
Yes. That's what I was r
Rather than transport quantities of the Introduction to R (a perfectly
sensible title for a very good starting point, IMHO) would it not be
simpler and involve less maintenance to include a link or
cross-reference in the 'formula' help page to the relevant part of the
Introduction? If nothing else,
Plaintive squeak: Why the change?
Some OS's and desktops use the extension, so forgetting it causes
trouble. The new default filename keeps a filetype (as before) but the
user now has to type a filetype twice (once as the type, once as
extension) to get the same effect for their own filenames. An
>>> Michael Prager <[EMAIL PROTECTED]> 06/04/08 4:28 AM >>>
>There is much to be said for consistency (across platforms and
>functions) and stability (across versions) in software.
I could not agree more. But while consistency is an excellent reason
for making the patch consistent across platforms
nsion default were suggestions for possible future tidying-up
in subsequent releases.
Steve E
>>> "S Ellison" <[EMAIL PROTECTED]> 06/04/08 12:44 PM >>>
>>> Michael Prager <[EMAIL PROTECTED]> 06/04/08 4:28 AM >>>
>There is much to be sai
?text says
"'adj' allows _adj_ustment of the text with respect to '(x,y)'.
Values of 0, 0.5, and 1 specify left/bottom, middle and right/top,
respectively."
But it looks like 0, 1 specify top, bottom respectively in the y
direction.
plot(1:4)
text(2,2, "adj=c(0,0)", adj=c(0,0))
text
Yup; you're all right - it IS consistent (and I'd even checked the x-adj
and it did what I expected!!). It's just that ?text is talking about the
position of the 'anchor' point in the text region rather than the
subsequent location of the centre of the text.
Anyway; if anyone is considering a mino
I had the same normalizePath error recently on a new laptop, with a fresh
install of R 2.8.1 and an attempt to install lme4. First attempt:
package 'Matrix' successfully unpacked and MD5 sums checked
Error in normalizePath(path) :
path[1]: The system cannot find the file specified
Second attem