On Tue, 12 Apr 2022 15:59:35 +1200,
Tiffany Vidal writes:
> devtools::install_github("MikkoVihtakari/ggOceanMapsData")
>
> Error in assign(".popath", popath, .BaseNamespaceEnv) :
> cannot change value of locked binding for '.popath'
> Calls: local ... eval.parent -> eval -> eval -> eval -> eval ->
aggregate(), tapply(), do.call(), rbind() (etc.) are extremely useful
functions that have been available in R for a long time. They remain
useful regardless of what plotting approach you use: base graphics,
lattice, or the more recent ggplot.
Philip
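A minimal base-R sketch of the functions Philip mentions, using the built-in iris data (the grouping choice is mine, purely for illustration):

```r
# Per-group means with aggregate(): returns a data frame
means_by_species <- aggregate(Sepal.Length ~ Species, data = iris, FUN = mean)

# The same summary with tapply(): returns a named array
tap <- tapply(iris$Sepal.Length, iris$Species, mean)

# do.call() + rbind() stitch a list of data frames back into one:
pieces  <- split(iris, iris$Species)
firsts  <- lapply(pieces, function(d) d[1, ])
rebuilt <- do.call(rbind, firsts)
```

Both summaries agree; the only difference is the shape of the result.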
On 22/02/2017 8:40 AM, C W wrote:
Hi Carl,
I have not fully learned dplyr, but it seems harder than tapply() and the
*apply() family in general.
Almost every ggplot2 data set I have seen is manipulated using dplyr. Something
must be good about dplyr.
aggregate(), tapply(), do.call(), rbind() will be sorely missed! :(
Thanks!
On Tu
Hi
I have found that:
a) Hadley's new book is wonderful on how to use dplyr, ggplot2, and his other
packages. Reading it and using it as a reference saves major frustration.
b) DataCamp's courses on ggplot2 are also wonderful. ggplot2 has more
capability than I have mastered or needed. To be a
Just. Don't. Do. This. (Hint: Threading mail readers.)
On 21 Feb 2017, at 03:53 , C W wrote:
> Thanks Hadley!
>
> While I got your attention, what is a good way to get started on ggplot2? ;)
--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000
I suspect Hadley would recommend reading his new book, R for Data Science
(r4ds.had.co.nz), in particular Chapter 3. You don't need plyr, but it won't
take long before you will want to be using dplyr and tidyr, which are covered
in later chapters.
--
Sent from my phone. Please excuse my brevity
Thanks Hadley!
While I got your attention, what is a good way to get started on ggplot2? ;)
My impression is that I first need to learn plyr, dplyr, AND THEN ggplot2.
That's A LOT!
Suppose I have this:
iris
iris2 <- cbind(iris, grade = sample(1:5, 150, replace = TRUE))
iris2
I want to have some
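The question is cut off here, so the intended output is unknown; assuming the goal was some per-group summary of iris2, one hedged base-R sketch (seeding sample() so the simulated grade column is reproducible):

```r
set.seed(1)  # sample() is random; fix the seed so the sketch is reproducible
iris2 <- cbind(iris, grade = sample(1:5, 150, replace = TRUE))

# A plausible follow-up (my guess, not the poster's stated goal):
# mean sepal length within each simulated grade
mean_by_grade <- aggregate(Sepal.Length ~ grade, data = iris2, FUN = mean)
```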
> On Feb 20, 2017, at 8:12 AM, Hadley Wickham wrote:
>
> On Sun, Feb 19, 2017 at 3:01 PM, David Winsemius
> wrote:
>>
>>> On Feb 19, 2017, at 11:37 AM, C W wrote:
>>>
>>> Hi R,
>>>
>>> I am a little confused by the data.table package.
>>>
>>> library(data.table)
>>>
>>> df <- data.frame(
On Sun, Feb 19, 2017 at 3:01 PM, David Winsemius wrote:
>
>> On Feb 19, 2017, at 11:37 AM, C W wrote:
>>
>> Hi R,
>>
>> I am a little confused by the data.table package.
>>
>> library(data.table)
>>
>> df <- data.frame(w=rnorm(20, -10, 1), x= rnorm(20, 0, 1), y=rnorm(20, 10, 1),
>> z=rnorm(20, 20
> On Feb 19, 2017, at 11:37 AM, C W wrote:
>
> Hi R,
>
> I am a little confused by the data.table package.
>
> library(data.table)
>
> df <- data.frame(w=rnorm(20, -10, 1), x= rnorm(20, 0, 1), y=rnorm(20, 10, 1),
> z=rnorm(20, 20, 1))
>
> df <- data.table(df)
setDT(df) is preferred; it converts the data.frame in place, by reference, so
no copy is made.
Hi R,
I am a little confused by the data.table package.
library(data.table)
df <- data.frame(w=rnorm(20, -10, 1), x= rnorm(20, 0, 1), y=rnorm(20, 10, 1),
z=rnorm(20, 20, 1))
df <- data.table(df)
#drop column w
df_1 <- df[, w := NULL] # I thought you are supposed to do: df_1 <- df[, -w]
df_2
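For contrast with data.table's `:= NULL` (which deletes the column by reference), here is why a plain `df[, -w]` fails for a data.frame, and some base-R ways to drop a column by name; the tiny df below is a stand-in, not the poster's data:

```r
df <- data.frame(w = 1:3, x = 4:6, y = 7:9, z = 10:12)

# df[, -w] fails because `-` wants a numeric index, not a bare column name.
d1 <- df[, names(df) != "w"]         # logical selection by name
d2 <- df[, setdiff(names(df), "w")]  # character selection by name
d3 <- subset(df, select = -w)        # subset() *does* understand -w
```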
Good evening! I'm running into some surprising behavior with dlnorm() and
trying to understand it.
To set the stage, I'll plot the density and overlay a normal distribution.
This works exactly as expected; the two graphs align quite closely:
qplot(data=data.frame(x=rnorm(1e5,4,2)),x=x,stat='dens
I've just reread my answer and it's not very clear. Not at all. Inline.
On 24-09-2012 18:34, Rui Barradas wrote:
Hello,
Inline.
On 24-09-2012 15:31, Bazman76 wrote:
Thanks Rui Barradas and Peter Alspach,
I understand better now:
x<-matrix(c(1,0,0,0,2,0,0,0,2),nrow=3)
y<-matrix(c(7,8,9
Hello,
Inline.
On 24-09-2012 15:31, Bazman76 wrote:
Thanks Rui Barradas and Peter Alspach,
I understand better now:
x<-matrix(c(1,0,0,0,2,0,0,0,2),nrow=3)
y<-matrix(c(7,8,9,1,5,10,1,1,0),nrow=3)
z<-matrix(c(0,1,0,0,0,0,6,0,0),nrow=3)
x[z]<-y[z]
viewData(x)
produces an x matrix
7
Thanks Rui Barradas and Peter Alspach,
I understand better now:
x<-matrix(c(1,0,0,0,2,0,0,0,2),nrow=3)
y<-matrix(c(7,8,9,1,5,10,1,1,0),nrow=3)
z<-matrix(c(0,1,0,0,0,0,6,0,0),nrow=3)
x[z]<-y[z]
viewData(x)
produces an x matrix
7 0 0
0 2 0
0 10 2
which makes sense the first el
Hello,
It is pretty basic, and it is deceptively simple. The worst of all :)
When you index a matrix 'x' by another matrix 'z', the index can be a
logical matrix of the same dimensions (or recyclable to the dims of 'x'),
or it can be a matrix with only two columns: a row-numbers column and a
column
first element of x and y (repeatedly).
Hope this helps
Peter Alspach
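A runnable sketch of the indexing behavior described above, using the matrices from later in this thread; note that a plain numeric matrix like z is flattened into linear indices, with zeros silently dropped:

```r
x <- matrix(c(1, 0, 0, 0, 2, 0, 0, 0, 2), nrow = 3)
y <- matrix(c(7, 8, 9, 1, 5, 10, 1, 1, 0), nrow = 3)
z <- matrix(c(0, 1, 0, 0, 0, 0, 6, 0, 0), nrow = 3)

# z's nonzero values are 1 and 6, so x[z] means x[c(1, 6)]: linear positions
x[z] <- y[z]          # copies y[1] and y[6] into x[1] and x[6]

# A logical mask of the same shape is usually what people intend:
x2 <- matrix(0, 3, 3)
x2[z != 0] <- 99      # assigns wherever z is nonzero
```

This reproduces the "7 0 0 / 0 2 0 / 0 10 2" result the poster saw.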
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Bazman76
Sent: Monday, 24 September 2012 8:53 a.m.
To: r-help@r-project.org
Subject: [R] Confused by code?
x<-matrix(c(1,0,0,0,1,0,0,0,1),nrow=3)
> y<-matrix(c(0,0,0,1,0,0,1,1,0),nrow=3)
> z<-matrix(c(0,1,0,0,1,0,1,0,0),nrow=3)
> x[z]<-y[z]
The resultant matrix x is all zeros except for the last two diagonal cells
which are 1's.
While y is lower-triangular 0's with the remaining cells all ones.
I real
Hello,
I'm working on a Cox Proportional Hazards model for a cancer data set that has
missing values for the categorical variable "Grade" in less than 10% of the
observations. I'm not a statistician, but based on my readings of Frank
Harrell's book it seems to be a candidate for using multiple
use is.null for the test
if (is.null(indexSkipped))
Sent from my iPad
On May 22, 2012, at 2:10, Alaios wrote:
> Dear all,
> I have a code that looks like the following (I am sorry that this is not a
> reproducible example)
>
>
> indexSkipped<-NULL
>
>
>
> code Skipped that might
Dear all,
I have a code that looks like the following (I am sorry that this is not a
reproducible example)
indexSkipped<-NULL
code Skipped that might alter indexSkipped
if (length(indexSkipped)==0)
spatial_structure<-spatial_structures_from_measurements(DataList[[i]]$L
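A sketch of the difference between the two tests; `is.null()` (suggested earlier in the thread) states the intent directly, while `length(...) == 0` is also TRUE for empty vectors that are not NULL:

```r
indexSkipped <- NULL
null_test <- is.null(indexSkipped)       # TRUE only for NULL itself
len_test  <- length(indexSkipped) == 0   # TRUE here too...

indexSkipped <- integer(0)               # ...but after this assignment:
null_test2 <- is.null(indexSkipped)      # FALSE: integer(0) is not NULL
len_test2  <- length(indexSkipped) == 0  # still TRUE
```

So the right choice depends on which sentinel the skipped code can actually produce.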
Hi
>
> This is copy & paste from my session:
>
> > xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))))
> > dim(xyz)<-c(length(xyz)/2,2)
> >
> > allobj<-function(){
> + xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))));
> + dim(xyz)<-c(length(xyz)/2,2);
> + return(xyz)
> + }
> > xyz
>
Short answer, environments -- ls() looks (by default) in its current
environment, which is not the same as the global environment when
being called inside a function.
This would (I think) give the same answer, but I haven't checked it:
> allobj<-function(){
+ xyz<-as.vector(c(ls(.GlobalEnv),as.m
Sorry, just checked it and you need to add ".GlobalEnv" to both ls() calls.
Michael
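A sketch of the environment behavior Michael describes (this assumes it runs at top level, so the script's workspace is .GlobalEnv):

```r
top_level_object <- 42

inside <- function() {
  local_only <- 1
  # ls() with no argument lists this function's frame, not the workspace
  list(here = ls(), global = ls(.GlobalEnv))
}

res <- inside()
# res$here is just "local_only"; res$global includes "top_level_object"
```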
On Mon, Feb 20, 2012 at 10:17 AM, R. Michael Weylandt
wrote:
> Short answer, environments -- ls() looks (by default) in its current
> environment, which is not the same as the global environment when
> being cal
On Feb 20, 2012, at 10:07 AM, Ajay Askoolum wrote:
This is copy & paste from my session:
xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))))
dim(xyz)<-c(length(xyz)/2,2)
allobj<-function(){
+ xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))));
+ dim(xyz)<-c(length(xyz)/2,2);
+ return(x
This is copy & paste from my session:
> xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))))
> dim(xyz)<-c(length(xyz)/2,2)
>
> allobj<-function(){
+ xyz<-as.vector(c(ls(),as.matrix(lapply(ls(),class))));
+ dim(xyz)<-c(length(xyz)/2,2);
+ return(xyz)
+ }
> xyz
[,1] [,2]
On Jan 27, 2012, at 17:18 , R. Michael Weylandt wrote:
> It doesn't have anything to do with attach (which is naughty in other ways!)
> rather it's the internal representation of categorical variables (R speak:
> factors) that store each level as an integer for memory efficiency but print
> t
It doesn't have anything to do with attach (which is naughty in other ways!)
rather it's the internal representation of categorical variables (R speak:
factors) that store each level as an integer for memory efficiency but print
things with string levels so they look nice to the user.
You'll
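A minimal illustration of the representation Michael describes: integer codes plus a table of string levels:

```r
f <- factor(c("treated", "control", "treated"))

codes <- as.integer(f)  # 2 1 2: positions in the (alphabetically sorted) levels
lbls  <- levels(f)      # "control" "treated"

# Printing shows the labels, but indexing the level table by the codes
# recovers the original strings:
back <- levels(f)[as.integer(f)]
```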
I am confused whether Student's sleep data "show the effect of two
soporific drugs" or Control against Treatment (one drug). The reason
is the following:
> require(stats)
> data(sleep)
> attach(sleep)
> extra[group==1]
numeric(0)
> group
[1] Ctl Ctl Ctl Ctl Ctl Ctl Ctl Ctl Ctl Ctl Trt Trt Trt Trt Trt T
-Original Message-
From: Jim Lemon [mailto:j...@bitwrit.com.au]
Sent: 14 November 2011 13:39
To: Prasanth V P
Cc: r-help@r-project.org
Subject: Re: [R] Confused with an error message related to "plotrix"
library in the newer versions of R.
On 11/14/2011 05:59 PM, Prasanth V P
On 11/14/2011 05:59 PM, Prasanth V P wrote:
require(plotrix)
xy.pop<- c(17,15,13,11,9,8,6,5,4,3,2,2,1,3)
xx.pop<- c(17,14,12,11,11,8,6,5,4,3,2,2,2,3)
agelabels<- c("0-4","5-9","10-14","15-19","20-24","25-29","30-34",
"35-39","40-44","45-49","50-54","55-59","60-64","65+")
xycol<-color.gr
Dear R Users,
Greetings!
I am confused with an error message related to "plotrix" library in the
newer versions of R.
I used to run an R script without fail in the earlier versions (R 2.8.1) of
R; but the same script is now throwing up an error message in the newer
versions (Now I have R 2.1
On Jul 7, 2011, at 10:17 PM, Gang Chen wrote:
Thanks for the help! Are you sure R version plays a role in this
case? My R version is 2.13.0
I'm not sure, but my version is 2.13.1
Your suggestion prompted me to look into the help content of ifelse,
and a similar example exists there:
Thanks for the help! Are you sure R version plays a role in this case? My R
version is 2.13.0
Your suggestion prompted me to look into the help content of ifelse, and a
similar example exists there:
x <- c(6:-4)
sqrt(x) #- gives warning
sqrt(ifelse(x >= 0, x, NA)) # no warning
On Jul 7, 2011, at 8:52 PM, David Winsemius wrote:
On Jul 7, 2011, at 8:47 PM, Gang Chen wrote:
I define the following function to convert a t-value with degrees
of freedom
DF to another t-value with different degrees of freedom fullDF:
tConvert <- function(tval, DF, fullDF) ifelse(DF>=1,
I define the following function to convert a t-value with degrees of freedom
DF to another t-value with different degrees of freedom fullDF:
tConvert <- function(tval, DF, fullDF) ifelse(DF>=1, qt(pt(tval, DF),
fullDF), 0)
It works as expected with the following case:
> tConvert(c(2,3), c(10,12)
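The likely source of the confusion (my reading, consistent with the ?ifelse example quoted above): ifelse() evaluates both branch expressions over the whole vector first and only then picks elements, so qt(pt(tval, DF), fullDF) still runs (and warns) for DF < 1, even though its result is discarded:

```r
tConvert <- function(tval, DF, fullDF) ifelse(DF >= 1, qt(pt(tval, DF), fullDF), 0)

ok <- tConvert(c(2, 3), c(10, 12), 20)      # both DF >= 1: converted normally
z  <- suppressWarnings(tConvert(2, 0, 20))  # DF < 1: branch still evaluated,
                                            # warning produced, result is 0
```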
On Jul 7, 2011, at 8:47 PM, Gang Chen wrote:
I define the following function to convert a t-value with degrees of
freedom
DF to another t-value with different degrees of freedom fullDF:
tConvert <- function(tval, DF, fullDF) ifelse(DF>=1, qt(pt(tval, DF),
fullDF), 0)
It works as expected wi
On 2011-02-16 09:42, Sam Steingold wrote:
Description:
'lapply' returns a list of the same length as 'X', each element of
which is the result of applying 'FUN' to the corresponding element
of 'X'.
I expect that when I do
lapply(vec,f)
f would be called _once_ for each compon
Description:
'lapply' returns a list of the same length as 'X', each element of
which is the result of applying 'FUN' to the corresponding element
of 'X'.
I expect that when I do
> lapply(vec,f)
f would be called _once_ for each component of vec.
this is not what I see:
parse.num
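lapply() does call FUN exactly once per element, as the help text says; a counter updated with `<<-` makes that visible:

```r
calls <- 0L

f <- function(v) {
  calls <<- calls + 1L  # record each invocation
  v * 2
}

out <- lapply(1:3, f)
# calls is now 3L and out is list(2, 4, 6)
```

If f appears to run more or fewer times, the cause is usually elsewhere (e.g. the function body itself, as in the poster's parse.num).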
On 07-Feb-11 08:18:49, Joel wrote:
> Hi
> I'm confused by one thing, and if someone can explain it I would be a
> happy
>
>> rev(strsplit("hej",NULL))
> [[1]]
> [1] "h" "e" "j"
>
>> lapply(strsplit("hej",NULL),rev)
> [[1]]
> [1] "j" "e" "h"
>
> Why doesn't the first one work? What is it in R tha
On 2011-02-07 00:18, Joel wrote:
Hi
I'm confused by one thing, and if someone can explain it I would be a happy
rev(strsplit("hej",NULL))
[[1]]
[1] "h" "e" "j"
lapply(strsplit("hej",NULL),rev)
[[1]]
[1] "j" "e" "h"
Why doesn't the first one work? What is it in R that "fails" so to say tha
Hi
I'm confused by one thing, and if someone can explain it I would be a happy
> rev(strsplit("hej",NULL))
[[1]]
[1] "h" "e" "j"
> lapply(strsplit("hej",NULL),rev)
[[1]]
[1] "j" "e" "h"
Why doesn't the first one work? What is it in R that "fails", so to say, that
you need to use lapply for it to
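What happens in the two calls above: strsplit() always returns a list with one element per input string, so rev() reverses that one-element outer list and changes nothing visible, while lapply(..., rev) reaches inside each element:

```r
parts <- strsplit("hej", NULL)   # list(c("h", "e", "j")): a list of length 1

outer_rev <- rev(parts)          # reverses the length-1 list: no visible change
inner_rev <- lapply(parts, rev)  # reverses the characters inside each element
```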
Hey,
I only got the output once because I was returning from the function at the
end of one loop iteration.
I set that right and I have printed the values.
The function I am using now is:
function(x)
{
  for (i in 1:length(x)) {
    print(names(x[i]))
    print(myets(x[[i]]))
  }
}
where myets is my customized exponentia
On Jun 25, 2010, at 7:09 AM, phani kishan wrote:
On Fri, Jun 25, 2010 at 1:54 PM, Paul Hiemstra
wrote:
On 06/25/2010 10:02 AM, phani kishan wrote:
Hey,
I have a data frame x which consists of say 10 vectors. I
essentially want
to find out the best fit exponential smoothing for each of
On Fri, Jun 25, 2010 at 1:54 PM, Paul Hiemstra wrote:
> On 06/25/2010 10:02 AM, phani kishan wrote:
>
>> Hey,
>> I have a data frame x which consists of say 10 vectors. I essentially want
>> to find out the best fit exponential smoothing for each of the vectors.
>>
>> The problem while I'm gettin
On 06/25/2010 10:02 AM, phani kishan wrote:
Hey,
I have a data frame x which consists of say 10 vectors. I essentially want
to find out the best fit exponential smoothing for each of the vectors.
The problem is that while I'm getting results when I say
lapply(x,ets)
I am getting an error whe
Hey,
I have a data frame x which consists of say 10 vectors. I essentially want
to find out the best fit exponential smoothing for each of the vectors.
The problem is that while I'm getting results when I say
> lapply(x,ets)
I am getting an error when I say
>> myprint
function(x)
{
for(i in 1:length(x))
On Apr 30, 2010, at 4:57 PM, Erik Iverson wrote:
>
>> I'm sure it's not a bug, but could someone point to a thread or offer some
>> gentle advice on what's happening? I think it's related to:
>> test <- data.frame(name1 = 1:5, name2 = 6:10, test = 11:15)
>> eval(expression(test[c("name1", "name
I'm sure it's not a bug, but could someone point to a thread or offer
some gentle advice on what's happening? I think it's related to:
test <- data.frame(name1 = 1:5, name2 = 6:10, test = 11:15)
eval(expression(test[c("name1", "name2")]))
eval(expression(interco[c("name1", "test")]))
scra
Hello!
I'm reading through a logistic regression book and using R to replicate
the results. Although my question is not directly related to this, it's
the context I discovered it in, so here we go.
Consider these data:
interco <- structure(list(white = c(1, 1, 0, 0), male = c(1, 0, 1, 0),
... forgot to post this back to the r-list.
it seems that the problem is with xts rather than zoo and yearmon per se, i.e.
using yearmon to index xts gives inconsistent results.
grateful for any help anyone can offer.
thanks
On Sun, Apr 18, 2010 at 8:15 PM, simeon duckworth wrote:
> Hi gabor
On Sun, Apr 18, 2010 at 2:51 PM, simeon duckworth
wrote:
> Hi Gabor
>
> Thats odd. I still get the same problem with the same versions of the
> software in your mail ... viz "as.yearmon" converts 2009(1) to "Dec-2008"
We can't conclude that it's in as.yearmon based on the output shown.
What is the
Hi Gabor
That's odd. I still get the same problem with the same versions of the
software in your mail ... viz "as.yearmon" converts 2009(1) to "Dec-2008"
and although xts is indexed at "Jan 2009" in xx, using it to create another
xts object with that index reverts to "Dec-2008".
grateful for any s
On Sun, Apr 18, 2010 at 8:25 AM, simeon duckworth
wrote:
> R-listers,
>
> I am using xts with a yearmon index, but am getting some inconsistent
> results with the date index when i drop observations (for example by using
> na.omit).
>
> The issue is illustrated in the example below. If I start wi
R-listers,
I am using xts with a yearmon index, but am getting some inconsistent
results with the date index when i drop observations (for example by using
na.omit).
The issue is illustrated in the example below. If I start with a monthly
zooreg series starting in 2009, yearmon converts this to
In the face of ambiguity, refuse the temptation to guess.
--- On Tue, 3/9/10, Rob Forler wrote:
From: Rob Forler
Subject: [R] confused by classes and methods.
To: r-help@
Hello, I have a simple class that looks like:
setClass("statisticInfo",
representation( max = "numeric",
min = "numeric",
beg = "numeric",
current = "numeric",
avg = "numeric",
JustADude wrote:
>
>
> ...
>
> By any chance is there more documentation out there on lists and this
> behavior, as I would like to try to better understand what is really going
> on and why one approach works and another doesn't.
>
> ...
> Example reproduced below
>
You forgot an assig
Through help from the list and a little trial and error (mainly error) I think
I figured out a couple of ways to append to a list. Now I am trying to access
the data that I appended to the list. The example below shows where I'm
trying to access that information via two different methods. I
Hi all,
I want to use the npudens() function in the np package (multivariate
kernel density estimation), but was confused by several functions in the
following code: expand.grid(), array(), image(), and npudensbw().
This confusion will only be generated in >=3 dimensions. I marked the four
pla
Interesting point.
Our data is NOT continuous. Sure, some of the test examples are older
than others, but there is no relationship between them. (More Markov-like
in behavior.)
When creating a specific record, we actually account for this in our SQL
queries which tend to be along the lines of
On Mon, Sep 7, 2009 at 1:22 PM, Noah Silverman wrote:
>
> The data is listed in our CSV file from newest to oldest. We are supposed
> to calculated a valued that is an "average" of some items. We loop through
> some queries to our database and increment two variables - $total_found and
> $total_
You both make good points.
Ideally, it would be nice to know WHY it works.
Without digging into too much verbiage, the system is designed to
predict the outcome of certain events. The "broken" model predicts
outcomes correctly much more frequently than one with the broken data
withheld. So,
On Mon, Sep 7, 2009 at 12:33 PM, Noah Silverman wrote:
> So, this is really a philosophical question. Do we:
> 1) Shrug and say, "who cares", the SVM figured it out and likes that bad
> data item for some inexplicable reason
> 2) Tear into the math and try to figure out WHY the SVM is predi
Predicting whilst confused is unlikely to produce sound predictions...
my vote is for finding out why before believing anything.
>>> Noah Silverman 09/07/09 8:33 PM >>>
Hi,
I have a strange one for the group.
We have a system that predicts probabilities using a fairly standard svm
(e1071). We
Hi,
I have a strange one for the group.
We have a system that predicts probabilities using a fairly standard svm
(e1071). We are looking at probabilities of a binary outcome.
The input data is generated by a perl script that calculates a bunch of
things, fetches data from a database, etc.
I posted the question below about a month ago but received no response.
I still have not been able to figure out what is happening.
I also noticed another oddity. When the data part of the object is a
multivariate time series, it doesn't show up in the structure, but it
can be treated as a multiva
I am trying to define an S4 class that contains a ts class object, a
simple
example is shown in the code below. However, when I try to create a new
object
of this class the tsp part is ignored, see below. Am I doing something
wrong,
or is this just a peril of mixing S3 and S4 objects?
> setClas
This was also posted on R-sig-mac, and I've answered it there.
Please don't cross-post.
On Wed, 15 Oct 2008, Gang Chen wrote:
When invoking dev.new() on my Mac OS X 10.4.11, I get an X11 window
instead of quartz which I feel more desirable. So I'd like to set
the default device to quartz. Howe
When invoking dev.new() on my Mac OS X 10.4.11, I get an X11 window
instead of quartz, which I find more desirable. So I'd like to set
the default device to quartz. However, I'm confused because of the
following:
> Sys.getenv("R_DEFAULT_DEVICE")
R_DEFAULT_DEVICE
"quartz"
> getOption("devi
Hi Mark, sorry for the late response. I am now moving to my new job... When
you use cor.balance() to estimate the correlation matrix, it is able to
handle many variables (genes) at one time. What is returned should be a 497
by 497 correlation matrix. But when you use cor.LRtest() to calculate
P-va
After some struggling with the data format, non-standard in
Bioconductor, I have gotten cor.balance in package CORREP to work. My
desire was to obtain maximum-likelihood p-values from the same data
object using cor.LRtest, but it appears that this function wants
something different, which I can
Hi,
I am reposting this as I fear my original post (on Oct. 4th) got
buried by all the excitement of the R 2.6 release...
I had a first occasion to try multiple comparisons (of intercepts, I
suppose) following a significant result in an ANCOVA. As until now I
was doing this with JMP, I comp
Hi,
I had a first occasion to try multiple comparisons (of intercepts, I
suppose) following a significant result in an ANCOVA. As until now I
was doing this with JMP, I compared my results and the post-hoc
comparisons were different between R and JMP.
I chose to use an example data set from