On Sun, 24 May 2009, Erin Hodgess wrote:
Dear R People:
I'm using R-2.8.1 on Ubuntu Jaunty Jackalope (or whatever its name
is), and having a problem with the vignette function:
vignette("snowfall")
sh: /usr/bin/xpdf: not found
xpdf is configured to be the default PDF viewer but it is not
Thanks for your thoughts on this. I tried the approach for the grouping
variable using the "within" function. I looked at a subset of my data for
which I do not get the deparse error in lmer and compared the results. The
approach using the "within" function to form the grouping variable
unde
if you are on a *nix system, then in a terminal move to the directory that
contains the source tarball of the package and type R CMD INSTALL
foo.tar.gz
hope this helps
Stephen Sefick
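For reference, a minimal sketch of both install routes, assuming a source
tarball named foo_1.0.tar.gz (the file name here is made up):
# in a shell:
#   R CMD INSTALL foo_1.0.tar.gz
# or from within R, pointing at the local file:
install.packages("foo_1.0.tar.gz", repos = NULL, type = "source")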
On Sun, May 24, 2009 at 5:50 PM, Le Wang wrote:
> Duncan Murdoch,
>
> Many thanks for your reply. I did try to download the
Dear R People:
I'm using R-2.8.1 on Ubuntu Jaunty Jackalope (or whatever its name
is), and having a problem with the vignette function:
> vignette("snowfall")
sh: /usr/bin/xpdf: not found
>
Has anyone run into this, please?
Or is this for the Debian R list, please?
Thanks,
Erin
--
Erin Hodgess
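One possible workaround, sketched under the assumption that xpdf is simply not
installed and that some other viewer (evince here, purely as an example) is:
options(pdfviewer = "/usr/bin/evince")  # put this in ~/.Rprofile to make it stick
vignette("snowfall")                    # should now open in the chosen viewer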
Oh, sorry, my own fault. I had set a HOME environment variable,
and there was a .Rprofile in it, and the setting of .Library.site was
overridden there.
Regards,
Leon
Leon Yee wrote:
> Dear R users,
>
> I am trying to customize the .Library.site in the file
> etc/Rprofile.site under Window
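A quick sketch of how to see what is actually in effect at startup (not from
the original post; just base R utilities):
Sys.getenv(c("R_PROFILE", "R_PROFILE_USER", "R_ENVIRON", "HOME"))  # which startup files apply
.Library.site   # the site library path as R sees it
.libPaths()     # the full library search path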
Dear R users,
I am trying to customize the .Library.site in the file
etc/Rprofile.site under Windows XP, but it seems that the setting
doesn't take effect. My setting is:
.Library.site <- "d:/site-library"
But after I launched R 2.9.0, the value is always
"d:/PROGRA~1/R/R-29~1.0/site
Ted,
I just ran everything using the log of all variables. Much better analysis,
and it doesn't violate the assumptions.
I'm still in the dark concerning the classification equation, other than the
fact that it will now contain log-transformed variables.
Thank you for your help,
Chase
Ted.Harding-2 wrote:
Is buffered output enabled in RGui? If so, uncheck it and see if the
problem clears up.
On Sun, May 24, 2009 at 3:24 PM, Bob Meglen wrote:
> I am attempting to use locator(n=2) to select the corners of several (5 in
> this case) rectangles on an image displayed in a JavaGD window. The returned
Duncan Murdoch,
Many thanks for your reply. I did try to download the older
versions from CRAN, but I am not quite sure how to compile them from
source. I tried using the option "Install package(s) from local zip
files" in R, but it didn't work. It simply gave the following message:
> utils:::menuIn
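A rough sketch of one way to get an older release; the package name and version
below are placeholders, and superseded sources live under CRAN's Archive area:
url <- "https://cran.r-project.org/src/contrib/Archive/foo/foo_0.9-1.tar.gz"    # hypothetical
download.file(url, destfile = "foo_0.9-1.tar.gz")
install.packages("foo_0.9-1.tar.gz", repos = NULL, type = "source")  # needs build tools on Windows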
On 24-May-09 20:32:06, cdm wrote:
> Dear Ted,
> Thank you for taking the time out to help me with this analysis.
> I'm seeing that I may have left out a crucial detail concerning
> this analysis. The ID measurement (interpubic distance) is a new
> measurement that has never been used in the field o
[Apologies -- I made an error (see at [***] near the end)]
On 24-May-09 19:07:46, Ted Harding wrote:
> [Your data and output listings removed. For comments, see at end]
>
> On 24-May-09 13:01:26, cdm wrote:
>> Fellow R Users:
>> I'm not extremely familiar with lda or R programming, but a recent
>
Dear R-helpers,
I have performed a BPCA (dudi.pca, between, package=ade4) and visualised the result
in a scatter plot (s.class, package=ade4). I would like to know the unit of "d" in the
scatterplot, which represents the size of the grid in the background of the plot.
So to make it short, what
Dear Ted,
Thank you for taking the time out to help me with this analysis. I'm seeing
that I may have left out a crucial detail concerning this analysis. The ID
measurement (interpubic distance) is a new measurement that has never been
used in the field of ornithology (to my knowledge). The objec
On 24/05/2009 4:00 PM, Le Wang wrote:
Hi there,
Thanks for your time in advance.
I am using an add-on package from CRAN. After I updated this package,
some of my programs don't work any more. I was wondering if there is
anything like version control so that I could use the older version of
that
Hi there,
Thanks for your time in advance.
I am using an add-on package from CRAN. After I updated this package,
some of my programs don't work any more. I was wondering if there is
anything like version control so that I could use the older version of
that package; or if I could manually install
On 24/05/2009 2:50 PM, Tom H wrote:
On Sun, 2009-05-24 at 19:32 +0100, Tom H wrote:
polygon(
c(from, x, to),
c(min(y),y,min(y)),
col=areacol, lty=0
)
I guess my question should have been, I don't seem to be able to query
or "reflect" a
Thanks, both of these methods work great!
Dimitris Rizopoulos-4 wrote:
>
> one way is:
>
> mat <-
> matrix(c(rep(NA,10),1,2,3,4,5,6,7,8,9,10,10,9,8,NA,6,5,4,NA,2,1,rep(NA,10),1,2,3,4,NA,6,7,8,9,10),
>
> 10, 5)
> ind <- colSums(is.na(mat)) != nrow(mat)
> mat[, ind]
>
>
> I hope it helps.
>
I am attempting to use locator(n=2) to select the corners of several (5 in
this case) rectangles on an image displayed in a JavaGD window. The returned
coords are used to draw labeled rectangles around the selected region. I
have tried several things to get this to work, including Sys.sleep to c
Here is one way of doing it:
> moreThan <- ave(choose$code, choose$code, FUN=length)
> moreThan
[1] 2 2 4 4 4 4 2 2 6 6 6 6 6 6
> choose[moreThan > 2,]
     firm year code
3       2 2000   11
4       2 2001   11
5       2 2002   11
6       2 2003   11
9       4 2001   13
10      4 2002   13
11      4 2003   13
1
[Your data and output listings removed. For comments, see at end]
On 24-May-09 13:01:26, cdm wrote:
> Fellow R Users:
> I'm not extremely familiar with lda or R programming, but a recent
> editorial review of a manuscript submission has prompted a crash
> course. I am on this forum hoping I could
Still not elegant, but I would split the string first:
spl.str <- unlist(strsplit("12345abcdefgh12345abcdefgh", ""))
Measure its length:
len.str <- length(spl.str)
Shift it:
spl.str <- c(spl.str[len.str], spl.str[seq(len.str - 1)])
Then paste it back together:
paste(spl.str, collapse="") # "h1
On Sun, 2009-05-24 at 19:32 +0100, Tom H wrote:
> polygon(
> c(from, x, to),
> c(min(y),y,min(y)),
> col=areacol, lty=0
> )
>
I guess my question should have been, I don't seem to be able to query
or "reflect"
Hi R collective,
I quite like the "curve" function because you can chuck an R function
into it and see the graph in one line of R.
I had a Google and found some threads on filling under the line:
http://tolstoy.newcastle.edu.au/R/e2/help/07/09/25457.html
However they seem to miss the point of
Hi Maura,
It is not "elegant" but may work.
actual.string<- "12345abcdefgh12345abcdefgh"
actual.string
actual.string<-paste(substr(actual.string,
nchar(actual.string),nchar(actual.string)),
substr(actual.string, 1,nchar(actual.string)-1), sep="")
actual.string
# in a loop:
actual.string<-
Dear dxc13,
Here is another way:
index <- apply(mat, 2, function(x) !all(is.na(x)))
mat[ , index]
HTH,
Jorge
On Sun, May 24, 2009 at 12:53 PM, dxc13 wrote:
>
> useR's,
> I have a matrix given by the code:
> mat <-
>
> matrix(c(rep(NA,10),1,2,3,4,5,6,7,8,9,10,10,9,8,NA,6,5,4,NA,2,1,rep(NA,10)
Hi R helpers!
I have the following dataframe «choose»
choose<-data.frame(firm=c(1,1,2,2,2,2,3,3,4,4,4,4,4,4),
year=c(2000,2001,2000,2001,2002,2003,2000,2003,2001,2002,2003,2004,2005,2006),code=c(10,10,11,11,11,11,12,12,13,13,13,13,13,13))
choose
I want to subset it to obtain another one with t
one way is:
mat <-
matrix(c(rep(NA,10),1,2,3,4,5,6,7,8,9,10,10,9,8,NA,6,5,4,NA,2,1,rep(NA,10),1,2,3,4,NA,6,7,8,9,10),
10, 5)
ind <- colSums(is.na(mat)) != nrow(mat)
mat[, ind]
I hope it helps.
Best,
Dimitris
dxc13 wrote:
useR's,
I have a matrix given by the code:
mat <-
matrix(c(rep(NA,1
Some wavelet analysis experts have implemented periodic boundary conditions for
signals.
I need to implement a circular buffer. Something like:
"12345abcdefgh12345abcdefgh"
so that at each step the rightmost element is moved to the leftmost index and
everything else is properly shifted:
"h1
Fellow R Users:
I'm not extremely familiar with lda or R programming, but a recent editorial
review of a manuscript submission has prompted a crash course. I am on this
forum hoping I could solicit some much needed advice for deriving a
classification equation.
I have used three basic measuremen
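For what it is worth, a generic sketch of how a classification equation can be
read off an lda fit in MASS; every name below except ID is invented, and the
toy data merely stand in for the real measurements:
library(MASS)
birds <- data.frame(group  = rep(c("A", "B"), each = 25),   # hypothetical groups
                    ID     = c(rnorm(25, 10), rnorm(25, 12)),
                    wing   = c(rnorm(25, 60), rnorm(25, 63)),
                    tarsus = rnorm(50, 20))
fit <- lda(group ~ ID + wing + tarsus, data = birds)
fit$scaling                          # coefficients of the linear discriminant
predict(fit, newdata = birds)$class  # predicted group membership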
useR's,
I have a matrix given by the code:
mat <-
matrix(c(rep(NA,10),1,2,3,4,5,6,7,8,9,10,10,9,8,NA,6,5,4,NA,2,1,rep(NA,10),1,2,3,4,NA,6,7,8,9,10),10,5)
This is a 10x5 matrix containing missing values. All columns except the
second contain missing values. I want to delete all columns that cont
On Sun, May 24, 2009 at 12:28 PM, kulwinder banipal wrote:
> It is for sure a little more complicated than a plain XML file. The format of
> the binary file follows an XML schema. I have been able to get a C parser
> going to extract information from the binary with one caveat - I have to manually
> read the XM
On 24/05/2009 12:21 PM, Sunita22 wrote:
Hello
I have 2 datasets, say Data1 and Data2; both are of different dimensions.
Data1:
120 rows and 6 columns (Varname, Vartype, Labels, Description, )
The column Varname has 120 rows with variable names such as id, age,
gender, and so on
Data2:
125
Hello Duncan
Thank you so much, it worked. I think I was doing it in a more complicated
way, so I didn't get a solution.
Thank you very much once again
Regards
Sunita
On Sun, May 24, 2009 at 9:59 PM, Duncan Murdoch wrote:
> On 24/05/2009 12:21 PM, Sunita22 wrote:
>
>> Hello
>>
>> I have 2 datasets
Um, this isn't an XML file. An XML file should look something like this:
>
Regards,
> Richie.
> Mathematical Sciences Unit
> HSL
It is for sure a little more complicated than a plain XML file. The format of the binary
file follows an XML schema. I have been able to get a C parser going to ge
Hello
I have 2 datasets, say Data1 and Data2; both are of different dimensions.
Data1:
120 rows and 6 columns (Varname, Vartype, Labels, Description, )
The column Varname has 120 rows with variable names such as id, age,
gender, and so on
Data2:
12528 rows and 120 columns
The column names i
spencerg wrote:
Dear Frank, et al.:
Frank E Harrell Jr wrote:
Yes; I do see a normal distribution about once every 10 years.
To what do you attribute the nonnormality you see in most cases?
(1) Unmodeled components of variance that can generate errors
in interpretation if i
Dear Frank, et al.:
Frank E Harrell Jr wrote:
Yes; I do see a normal distribution about once every 10 years.
To what do you attribute the nonnormality you see in most cases?
(1) Unmodeled components of variance that can generate errors
in interpretation if ignored, even
Jarle Bjørgeengen wrote:
On May 24, 2009, at 3:34 , Frank E Harrell Jr wrote:
Jarle Bjørgeengen wrote:
Great,
thanks Manuel.
Just out of curiosity, is there any particular reason you chose standard error,
and not confidence interval, as the default (the naming of the
plotting functions associates close
Hi Bill,
I'm about to take a look at this. If I understand the issue, very
long expressions for what I call the "grouping factor" of a random
effects term (the expressions on the right hand side of the vertical
bar) are encountering problems with deparse. I should have realized
that, any time on
OK, one more for the records. This script is now written so that it
uses Rscript instead of bash. The last line still does not work. I
don't know what make.packages.html requires, but apparently it
requires more than DESCRIPTION and 00Index.html in order to include a
package. (The line about bu
You might want to use cross-validation or the bootstrap to get error
estimates. Also, you should include the PCA step in the resampling
since it does add noise to the model.
Look at the pcaNNet and train functions in the caret package.
Also your code for the nnet would imply that you are predicti
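A rough sketch of that caret suggestion; the toy data stand in for the real
matrix, and the last column is assumed to hold the class to be predicted:
library(caret)
dat <- data.frame(matrix(rnorm(200 * 10), 200, 10),
                  y = factor(sample(c("a", "b"), 200, TRUE)))   # invented data
ctrl <- trainControl(method = "cv", number = 10)    # 10-fold cross-validation
fit <- train(x = dat[, -ncol(dat)], y = dat[, ncol(dat)],
             method = "pcaNNet", trControl = ctrl)  # PCA + nnet inside the resampling
fit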
On May 24, 2009, at 3:34 , Frank E Harrell Jr wrote:
Jarle Bjørgeengen wrote:
Great,
thanks Manuel.
Just out of curiosity, is there any particular reason you chose standard
error, and not confidence interval, as the default (the naming of
the plotting functions associates closer to the confidence
in
Paul Heinrich Dietrich wrote:
> Thank you very much for the help, I will work on this over the weekend. Is there
> a way in Windows to connect R and Cream?
Perhaps, although I can't help... It would be necessary to write another
plugin: https://stat.ethz.ch/pipermail/r-help/2009-May/197794.html
_
Create your own using 'segments'.
On Sun, May 24, 2009 at 8:08 AM, Martin Ivanov wrote:
> Dear R users,
> I need to produce a plot with a single panel and a few lines on it. Each
> line represents a different data set. The line types must be "h", i.e.
> histogram like (or high-density) vertic
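A minimal sketch of the 'segments' idea with made-up data; each series gets a
small horizontal offset so the vertical lines do not overplot:
x  <- 1:20
y1 <- rnorm(20); y2 <- rnorm(20)
plot(x, y1, type = "n", ylim = range(c(0, y1, y2)), ylab = "y")
segments(x - 0.1, 0, x - 0.1, y1, col = "black")  # first data set
segments(x + 0.1, 0, x + 0.1, y2, col = "red")    # second data set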
Jarle Bjørgeengen wrote:
Great,
thanks Manuel.
Just out of curiosity, is there any particular reason you chose standard error, and
not confidence interval, as the default (the naming of the plotting
functions associates closer to the confidence interval) error
indication?
- Jarle Bjørgeengen
O
Yes. Most classical optimization methods (e.g. gradient-type, Newton-type) are
"local", i.e. they do not attempt to locate the global optimum. The primary
difficulty with global optimization is that there are no mathematical
conditions that characterize the global optimum in multi-modal problems.
On 24-05-2009, at 14:24, Esmail wrote:
Hello Berend,
Berend Hasselman wrote:
Your function is not unimodal.
The help for optimize states: "If f is not unimodal, then
optimize() may approximate a local, but perhaps
non-global, minimum to the same accuracy."
Ah ok, I didn't read the manual
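A small illustration of that point with a made-up multi-modal function;
optimize() is run over several sub-intervals and the best result kept:
f <- function(x) sin(3 * x) + 0.5 * x    # not unimodal on [0, 10]
optimize(f, c(0, 10))                    # may settle on a local minimum
breaks <- seq(0, 10, by = 2)
fits <- lapply(seq_along(breaks)[-1],
               function(i) optimize(f, c(breaks[i - 1], breaks[i])))
fits[[which.min(sapply(fits, "[[", "objective"))]]  # best of the local searches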
Great,
thanks Manuel.
Just out of curiosity, is there any particular reason you chose standard error,
and not confidence interval, as the default (the naming of the plotting
functions associates closer to the confidence interval) error
indication?
- Jarle Bjørgeengen
On May 24, 2009, at 3:02
Hi Mike and Gabor - thanks for the help.
It seems I have made a mistake in my original question. While Mike's
solutions worked on the example data I provided, I now see my actual data is
> is(df100_lang$gray)
[1] "character" "vector" "data.frameRowLabels"
and the solution do
You define your own function for the confidence intervals. The function
needs to return the two values representing the upper and lower CI
values. So:
qt.fun <- function(x) qt(p=.975,df=length(x)-1)*sd(x)/sqrt(length(x))
my.ci <- function(x) c(mean(x)-qt.fun(x), mean(x)+qt.fun(x))
lineplot.CI(x.f
Thank you very much for the help, I will work on this over the weekend. Is there
a way in Windows to connect R and Cream?
Jakson Alves de Aquino wrote:
>
>
> As pointed out by JiHO, the biggest disadvantage of using the plugin is that
> R is running through a pipe and consequently it is less interacti
Hello Berend,
Berend Hasselman wrote:
Your function is not unimodal.
The help for optimize states:
"If f is not unimodal, then optimize() may approximate a local, but perhaps
non-global, minimum to the same accuracy."
Ah ok, I didn't read the manual page carefully enough.
Do you know if
Try storing them as character strings rather than factors:
black_gray <- data.frame(black, gray, stringsAsFactors = FALSE)
Try this to view what you've got:
str(black_gray)
On Sun, May 24, 2009 at 7:15 AM, Andreas Christoffersen wrote:
> Hi,
>
> In the example dataset below - how can I change
Dear R users,
I need to produce a plot with a single panel and a few lines on it. Each line
represents a different data set. The line types must be "h", i.e. ‘histogram’
like (or ‘high-density’) vertical lines. The problem is that the vertical lines
comprising a plot line of type="h" are drawn fr
This should work:
levels(black_gray$gray)[levels(black_gray$gray)=='gray20'] = 'blue'
On Sun, May 24, 2009 at 8:15 AM, Andreas Christoffersen wrote:
> Hi,
>
> In the example dataset below - how can I change "gray20" to "blue"
>
> # data
> black <- rep(c("black","red"),10)
> gray <- rep(c("gray1
Hi. I started with a file which was a sparse 982x923 matrix and where the
last column was a variable to be predicted. I did principal component
analysis on it and arrived at a new 982x923 matrix.
Then I ran the code below to get a neural network using nnet and then wanted
to get a confusion matrix
JiHO wrote:
> On 2009-May-23 , at 20:16 , Jakson Alves de Aquino wrote:
>
>> Just a note: there is no need of <Esc> before <F9>. Almost all key
>> bindings work in insert, normal and visual modes.
>
> Well, without switching to the non-insert mode, I find that pressing F9
> prints the commands in the fil
Hi,
In the example dataset below - how can I change "gray20" to "blue"
# data
black <- rep(c("black","red"),10)
gray <- rep(c("gray10","gray20"),10)
black_gray <- data.frame(black,gray)
# none of these desperate things works
# replace(black_gray$gray, gray=="gray20","red")
# if(black_gray$gray==
Wow, thank you so much!
Where can I learn such creative approaches?
Best,
Holger
Romain Francois-2 wrote:
>
> Hollix wrote:
>> Hi there,
>>
>> say, I have 100 matrices (m1,m2,...,m100) which I want to combine in a
>> list.
>> The list, thus, shall contain the matrices as components.
>>
>> Is
Hello,
I have a bidimensional dataset and have successfully plotted (with the help of
this list, btw) the density of the data with smoothScatter.
I have just one other issue:
I would like to see that plot "normalized" in X; that is, I
would like to have a 2d density plot of Y|X to see where are the
c
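That is not something smoothScatter does directly; a rough sketch with
MASS::kde2d, where x and y below are toy stand-ins for the two columns,
is to rescale the estimated density within each x slice:
library(MASS)
x <- rnorm(1000); y <- 0.5 * x + rnorm(1000)      # invented data
dens <- kde2d(x, y, n = 100)
dens$z <- sweep(dens$z, 1, rowSums(dens$z), "/")  # normalise each x slice to sum to 1
image(dens$x, dens$y, dens$z, xlab = "x", ylab = "y")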
Hollix wrote:
Hi there,
say, I have 100 matrices (m1,m2,...,m100) which I want to combine in a list.
The list, thus, shall contain the matrices as components.
Is it necessary to mention all 100 matrices in the list() command? I would
like to use just the first and last matrix or something simil
Hi there,
say, I have 100 matrices (m1,m2,...,m100) which I want to combine in a list.
The list, thus, shall contain the matrices as components.
Is it necessary to mention all 100 matrices in the list() command? I would
like to use just the first and last matrix or something similar.
Best,
Holg
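A sketch of one way to avoid typing all 100 names, assuming the matrices really
are called m1 ... m100 in the global environment (the loop below just creates
toy matrices so the sketch runs on its own):
for (i in 1:100) assign(paste("m", i, sep = ""), matrix(rnorm(4), 2, 2))
mat.list <- mget(paste("m", 1:100, sep = ""), envir = globalenv())
# or, dropping the names:
mat.list2 <- lapply(paste("m", 1:100, sep = ""), get)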
On 24/05/2009, at 3:29 PM, echo_july wrote:
I have trouble using the spatstat package.
I want to simulate a community under the Strauss process, which
has a parameter gamma that controls the interaction strength between
points, and the Strauss process is defined only for 0 ≤ gamma ≤ 1, and
i
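A minimal sketch of one way to simulate such a pattern in spatstat; the
parameter values here are invented:
library(spatstat)
X <- rStrauss(beta = 100, gamma = 0.5, R = 0.05)  # gamma must lie in [0, 1]
plot(X)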