On Tue, May 31, 2016 at 7:05 PM, Jeff Newmiller wrote:
You need to go back and study how I made my solution reproducible and make your
problem reproducible.
You probably also ought to spend some time comparing the regex pattern to your
actual data... the point of this list is to learn how to construct these
solutions yourself.
--
Sent from my phone. Please excuse my brevity.
Thank you so much Jeff. It worked for this example.
When I read it from a file (c:\data\test.txt), it did not work:
KLEM="c:\data"
KR=paste(KLEM,"\test.txt",sep="")
indta <- readLines(KR, skip=46) # not interested in the first 46 lines
pattern <- "^.*group (\\d+)[^:]*: *([-+0-9.eE]*).*$"
firstli
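A minimal sketch of how that file read might look, assuming the same path and
the regex from above (backslashes in R string literals must be doubled or
replaced with forward slashes, and readLines() has no skip argument, so the
unwanted leading lines are dropped after reading):

KLEM <- "c:/data"                      # forward slashes avoid escape problems
KR   <- file.path(KLEM, "test.txt")    # "c:/data/test.txt"

allLines <- readLines(KR)              # readLines() has no 'skip' argument
indta    <- allLines[-seq_len(46)]     # not interested in the first 46 lines

pattern <- "^.*group (\\d+)[^:]*: *([-+0-9.eE]*).*$"
hits    <- grepl(pattern, indta)
groups  <- sub(pattern, "\\1", indta[hits])
values  <- as.numeric(sub(pattern, "\\2", indta[hits]))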
Hi Georg,
You may find the "add.value.labels" function in the prettyR package useful.
Jim
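The exact prettyR call is not shown here, so as a hedged base-R illustration of
the same idea (item name and codes invented), value labels can also be carried
as factor levels:

q1 <- c(1, 2, 2, 1, 2)                 # hypothetical item coded 1/2
q1_labelled <- factor(q1, levels = c(1, 2), labels = c("male", "female"))
table(q1_labelled)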
On Tue, May 31, 2016 at 10:00 PM, wrote:
> Hi All,
>
> I am using R for social sciences. In this field I am used to using short
> variable names like "q1" for question 1, "q2" for question 2, and so on, and
>
Use the power (exponentiation) operator:
> log(78,10)
[1] 1.892095
> 10^log(78,10)
[1] 78
On Tue, May 31, 2016 at 4:14 PM, Carlos wrote:
The following function can do the work as well
antilog <- function(lx, base) {
  # lx is a logarithm to the given base; return base^lx
  lbx <- lx / log(exp(1), base = base)
  result <- exp(lbx)
  result
}
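A quick check of the function against the direct power form (values chosen only
for illustration):

antilog(log(78, 10), 10)   # 78, recovers the original value
antilog(3, 2)              # 8, since 2^3 = 8
10^log(78, 10)             # same result via the power operator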
This solution is based on the change of base formula, which states that:
log(x, base = b) = log(x, base = a) / log(b, base = a)
The original logarithm is changed into
Thank you, Bert. That's perfect! I will do that.
On 31 May 2016 21:43, "Bert Gunter" wrote:
> Briefly, as this is off-topic, and inline:
Probably impossible to answer without your following the posting guide
and posting your code, etc.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On
Briefly, as this is off-topic, and inline:
Bert Gunter
On Tue, May 31, 2016 at 11:32 AM, Dan Kolubinski wrote:
> That makes per
Standard reply (see posting guide):
Update to the current version of R (3.3.0 or so) and retry. Your
version is old -- this often leads to incompatibilities with newer
software versions.
Cheers,
Bert
Wild guess: You have huge, high-dimensional VAR models, i.e. the matrices
get huge, so you use huge amounts of memory -- more than what is available
physically. The operating system protects itself by killing processes in
such a case...
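A rough way to gauge this (dimensions below are invented): a double-precision
matrix needs about 8 bytes per cell, and object.size() reports the memory an
existing object uses.

n_rows <- 1e5                           # hypothetical dimensions
n_cols <- 5e3
n_rows * n_cols * 8 / 1024^3            # ~3.7 GB for one such matrix

x <- matrix(rnorm(1e6), nrow = 1000)    # small example object
print(object.size(x), units = "MB")     # about 7.6 MB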
Best,
Uwe Ligges
On 31.05.2016 20:29, Vivek
Hi,
I am replicating in R an SEM model that was done in SAS using PROC CALIS.
I used the sem package in R, but I am not getting the same GFI as in SAS
(approximately a 15% difference), and one link is non-significant in R but
significant in SAS.
I searched online in different blogs b
Hi,
I am using VARs (vector autoregressive models). The process gets killed
after running for some time. Following is the output of R.
vivek@isds-research:~/cloudAuction/padding/panel$ cat var.Rout
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistica
That makes perfect sense. Thank you, Michael. I take your point about not
chasing the data and definitely see the risks involved in doing so. Our
hypothesis was that the first, second and fourth variables would be
significant, but the third one (intervention) would not be. I will
double-check t
Then perhaps your example should illustrate one of these "many situations" that
trouble you, since you are not being clear about them.
--
Sent from my phone. Please excuse my brevity.
On May 31, 2016 11:39:04 AM PDT, Santosh wrote:
>I agree that performing merge outside the scope of "within" function,
I agree that performing the merge outside the scope of the "within" function
is pretty straightforward. At times there are situations when many, if not
all, of the operations need to be done within the scope of the "within"
environment.
Thanks so much..
Regards,
Santosh
On Tue, May 31, 2016 at 11:
What is complicated about merge( q, r )?
Keep in mind that there is nothing simple about the rules for non-standard
evaluation of variables that within() uses, and it only gets more complicated
if you try to apply those rules to two data frames at once. While I am not
quite sure I understand wh
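For illustration, a minimal sketch of the plain approach being suggested (data
frames q and r and the key column are invented here):

q <- data.frame(id = 1:3, x = c(10, 20, 30))
r <- data.frame(id = 2:4, y = c("a", "b", "c"))

m <- merge(q, r, by = "id")      # plain merge, outside within()
m <- within(m, z <- x * 2)       # derived columns can still use within()
m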
Thanks for the response. I want to merge two data frames using the "within"
function; the columns to be used for the merge could vary. Then the other
commands become simpler.
Thanks so much for your help!
Santosh
On Sat, May 28, 2016 at 1:53 PM, Duncan Murdoch
wrote:
> On 27/05/2016 7:00 PM, Santosh wrote:
Thanks, Sarah, added now in the devel-package on R-Forge.
Z
On Tue, 31 May 2016, Sarah Goslee wrote:
On Tue, May 31, 2016 at 11:09 AM, Jeff Newmiller
wrote:
However, please don't apply R like a magic answers box, because you can mislead
others and cause harm.
On Tue, 31 May 2016, T.Riedle wrote:
Many thanks for your feedback.
If I get the code for the waldtest right I can calculate the Chi2 and
the F statistic using waldtest().
Yes. In a logit model you would usually use the chi-squared statistic.
Can I use the waldtest() without using bread()/
There are lots of ways to handle this kind of thing, and the other
suggestions are good. But specific to your "something like" idea, see the
output of
Sys.info()
in particular
Sys.info()['nodename']
Sys.info()['user']
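For example, a minimal sketch of branching on the machine name to pick a local
data directory (the node names and paths below are invented):

node <- Sys.info()[["nodename"]]
data_dir <- switch(node,
  "lab-workstation" = "~/secure_data/project1",
  "analyst-laptop"  = "C:/Users/me/project1/data",
  "data")                                    # fallback: relative path
dat <- read.csv(file.path(data_dir, "measurements.csv"))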
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East
In every activity, knowing something about it allows you to avoid repeating the
mistakes of the past. There are non-statistical uses of programming languages,
so you could use it for domains you are familiar with. Or you could see some
intriguing statistical analysis and study in that area to un
Many thanks for your feedback.
If I get the code for the waldtest right, I can calculate the Chi2 and the F
statistic using waldtest(). Can I use waldtest() without using bread()/
estfun()? That is, I estimate the logit regression using glm(), e.g.
logit <- glm(...), and insert logit into the wa
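A hedged sketch of that direction for a glm() fit, using the sandwich and
lmtest packages on simulated data (argument names as documented there; check
?coeftest and ?waldtest):

library(sandwich)   # vcovHAC()
library(lmtest)     # coeftest(), waldtest()

set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.5 * x))
logit <- glm(y ~ x, family = binomial)

coeftest(logit, vcov. = vcovHAC(logit))                      # HAC standard errors
waldtest(logit, . ~ . - x, vcov = vcovHAC, test = "Chisq")   # Wald chi-squared test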
Dear Prasad
If you want to use R to do statistics then statistical knowledge is
essential. If you want to use R to do one of the many, many other things
it can do then you only need knowledge of whichever of those is your target.
On 31/05/2016 08:22, Prasad Kale wrote:
Hi,
I am very new to
Dear R users,
I am trying to calculate the NAV of a portfolio using the Return.portfolio
function in the PerformanceAnalytics package. I am having difficulties with
how I should specify the weights in the function.
I tried to replicate using fixed weights with rebalance_on = "months" by
specifying weights expl
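A hedged sketch of the fixed-weight, monthly-rebalanced case with made-up
returns (argument names as I recall them from the PerformanceAnalytics
documentation; please check ?Return.portfolio):

library(xts)
library(PerformanceAnalytics)

dates <- seq(as.Date("2015-01-01"), by = "month", length.out = 12)
R <- xts(matrix(rnorm(24, 0.01, 0.03), ncol = 2,
                dimnames = list(NULL, c("A", "B"))),
         order.by = dates)

pr <- Return.portfolio(R, weights = c(0.6, 0.4),
                       rebalance_on = "months",
                       wealth.index = TRUE)       # wealth index ~ NAV path from 1
tail(pr)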
I am not sure this is relevant or helpful, but see ?abbreviate, which
one can use to abbreviate long strings as labels (but only for
English-like languages, I believe).
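A tiny illustration (labels invented):

labs <- c("Please tell us your age",
          "Could you state your household income?")
abbreviate(labs, minlength = 10)   # shortened but still unique, usable as labels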
-- Bert
Greetings Prasad,
Here are some tutorials on statistics using R:
Statistics and Actuarial Science – Carl James Schwarz
http://people.stat.sfu.ca/~cschwarz/CourseNotes/
Statistics and Actuarial Science – Carl James Schwarz - Programs
http://people.stat.sfu.ca/~cschwarz/Stat-650/Notes/MyPrograms
Inline.
Cheers,
Bert
On Tue, May 31, 2016 at 12:05 AM, Michael Haenlein
wrote:
> Dear all,
>
> I am running a si
Assume everyone will begin their work in a suitable working directory for their
computer. Put data in that working directory or some directory "near" it. Then
use relative paths to the data instead of absolute paths (don't use paths that
start with "/"). I usually start by reading in a "configur
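A minimal sketch of that pattern (file names invented): keep any per-machine
settings in a small configuration file next to the script and read data by
relative path only.

if (file.exists("config.R")) {
  source("config.R")                      # may redefine data_file for this machine
} else {
  data_file <- "data/measurements.csv"    # default relative path
}
dat <- read.csv(data_file)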
Hi
Your message is rather scrambled and, to be honest, not easily understandable
(by me).
Having a two-column matrix
> mat <- matrix(1:8, 4, 2)
> mat
     [,1] [,2]
[1,]    1    5
[2,]    2    6
[3,]    3    7
[4,]    4    8
you can calculate e.g. the distance matrix
> dist(mat, diag=T, upper=T)
1
On Tue, May 31, 2016 at 5:44 AM, Nikolai Stenfors <
nikolai.stenf...@gapps.umu.se> wrote:
> We conduct medical research and our datafiles therefore contain sensitive
> data, not to be shared in the cloud (Dropbox, Box, Drive, Bitbucket,
> GitHub).
> When we collaborate on an R analysis script, we s
Hi
Well, it seems to me like cooking.
You do not have to be a trained cook to be able to prepare some food in your
kitchen, but knowledge of some recipes can lead to tasty results.
Regards
Petr
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Prasad
>
My general approach to this is to put the function for loading data
into a separate file which is then sourced in the main analysis file.
Occasionally I'll use a construct like:
if (file.exists("loadData_local.R")) {
  source("loadData_local.R")
} else {
  source("loadData_generic.R")
}
Whe
On Tue, May 31, 2016 at 2:22 AM, Prasad Kale
wrote:
> Hi,
>
> I am very new to R and just started learning R. But I am not from a
> statistical background, so can I learn R, or is a statistical background
> a must to learn R?
>
Well, I got a B.Sc. in Math back many years ago. I "earned" a C- in
Statist
Hi
see inline
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
> g.maub...@weinwolf.de
> Sent: Tuesday, May 31, 2016 2:01 PM
> To: r-help@r-project.org
> Subject: [R] Variable labels and value labels
>
> Hi All,
>
> I am using R for social sciences.
Here is the solution to your question:
test <- data.frame(C1 = c('a,b,c,d'), C2 = c('g,h,f'))
You should use gsub instead of sub if you want all occurrences of "," in
each element to be replaced (sub only replaces the first one):
tFun <- function(x) { gsub(",", ";", x) }
newTest <- apply(test, 2, tFun)
Cheers,
Hi All,
I was new to R and this list a couple of months ago. When processing my
data I got tremendous support from the R-help mailing list.
The solutions I have worked out with your help might also be helpful for
others. I have put the solutions in a couple of small functions with
documentation a
We conduct medical research and our data files therefore contain sensitive
data, not to be shared in the cloud (Dropbox, Box, Drive, Bitbucket, GitHub).
When we collaborate on an R analysis script, we stumble upon the following
annoyance. Researcher 1 has a line in the script importing the sensitive
Hi,
I am very new to R and just started learning R. But I am not from a
statistical background, so can I learn R, or is a statistical background
a must to learn R?
Please guide.
Thanks in Advance
Prasad
Hi
I do not consider myself an expert in factorial design, but why do you insist
on 4 levels per factor? My opinion is that you need more than 2 levels only if
you expect, and want to evaluate, a nonlinear relationship between the response
and such a factor.
If you used only 2 levels you could find
Hi Group,
I have a large data set of individual pairwise values (100 rows) that I
need to reshape into a pairwise matrix for Mantel tests of similarity of
these values.
I need this matrix for a Pathfinder network analysis.
I have different data (words) such as:
living thing
0
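A hedged sketch of reshaping such long-format pairwise values into a square
matrix (column names and values invented):

pairs <- data.frame(item1 = c("a", "a", "b"),
                    item2 = c("b", "c", "c"),
                    value = c(0.1, 0.4, 0.3),
                    stringsAsFactors = FALSE)

tab   <- xtabs(value ~ item1 + item2, data = pairs)      # rectangular cross-tab
items <- sort(unique(c(pairs$item1, pairs$item2)))
m <- matrix(0, length(items), length(items), dimnames = list(items, items))
m[rownames(tab), colnames(tab)] <- tab
m <- m + t(m)                    # symmetric pairwise matrix, zero diagonal
m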
Hi All,
I am using R for social sciences. In this field I am used to using short
variable names like "q1" for question 1, "q2" for question 2, and so on, and
to labelling the variables like q1: "Please tell us your age" or q2: "Could
you state your household income?" or something similar indicating w
Hi,
thank you for your answer. To track down the problem, I tried this
(modified from your code):
thickticks <- c(0,60,130,210,290,370,450,530,610,690,770,850,930)
png("test.png",width=864,height=834,res=150)
plot(seq(0,1000),rep(10,1001),xaxt="n")
axis(1,seq(0,1000,by=10),at=seq(0,1000,by=1
I understood. But how do I get the R2 and Chi2 of my logistic regression under
HAC standard errors? I would like to create a table with HAC SEs via e.g.
stargazer().
Do I get this information by using the functions
bread.lrm <- function(x, ...) vcov(x) * nobs(x)
estfun.lrm <- function(x, ...) r
Inline.
On 30/05/2016 19:27, Dan Kolubinski wrote:
I am completing a meta-analysis on the effect of CBT on low self-esteem and
I could use some help regarding the regression feature in metafor. Based
on the studies that I am using for the analysis, I identified 4 potential
moderators that I wan
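For context, a hedged sketch of a meta-regression with moderators in metafor
(effect sizes, variances and moderator names are all invented; see ?rma):

library(metafor)

dat <- data.frame(
  yi   = c(0.40, 0.55, 0.30, 0.65, 0.20, 0.50),   # per-study effect sizes
  vi   = c(0.02, 0.03, 0.01, 0.04, 0.02, 0.03),   # sampling variances
  dose = c(8, 12, 6, 16, 5, 10),
  type = factor(c("group", "individual", "group",
                  "individual", "group", "individual"))
)

res <- rma(yi, vi, mods = ~ dose + type, data = dat)   # mixed-effects meta-regression
summary(res)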
On Mon, 30 May 2016, Leonardo Ferreira Fontenelle wrote:
On Sat, 28 May 2016, at 15:50, Achim Zeileis wrote:
On Sat, 28 May 2016, T.Riedle wrote:
> I thought it would be useful to incorporate the HAC consistent
> covariance matrix into the logistic regression directly and generate an
> out
You were clearly mistaken.
dataframe$column is almost the same as dataframe[["column"]], except that the $
does partial matching. Both of these "extract" a list element.
A data frame is a list where all elements are vectors of the same length. A
list is a vector where each element can refer
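A small illustration of those points (toy data frame invented here):

df <- data.frame(value = 1:3, label = c("a", "b", "c"))

df$val          # $ does partial matching: returns the 'value' column
df[["val"]]     # [[ matches exactly by default: returns NULL
df[["value"]]   # the same column as df$value
is.list(df)     # TRUE: a data frame is a list of equal-length vectors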
Dear all,
I am running a simulation experiment with 8 factors that each have 4
levels. Each combination is repeated 100 times. If I run a full factorial
this would mean 100*4^8 = 6,553,600 runs.
I am trying to reduce the number of scenarios to run using a fractional
factorial design. I'm interested
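For scale, a base-R check of the full-factorial size (the fractional design
itself would come from a dedicated package such as FrF2 or DoE.base):

levels_per_factor <- 4
n_factors <- 8
n_reps <- 100

design <- expand.grid(rep(list(seq_len(levels_per_factor)), n_factors))
nrow(design)            # 4^8 = 65,536 combinations
nrow(design) * n_reps   # 6,553,600 runs with 100 replicates each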