sages for each command recorded below:
token <- tokenize("/Users/Gordon/Desktop/WPSCASESONE/", lang = "en", doc_id = "sample")
The code is the same for the other folders; only the folder name differs.
The error message reads:
*Error
Thanks; that's a good point. Here is what I have been working with:
library(quanteda)
library(readtext)
texts <- readtext(paste0("/Users/Gordon/Desktop/WPSCASES/", "/word/*.docx"))
And the error message:
Error in list_files(file, ignore_missing, TRUE, verbosity) :
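One thing worth checking: the paste0() call above produces a doubled slash ("WPSCASES//word"). Whether or not that is the actual cause here, a quick sanity check on the path (a sketch, assuming the folder layout shown in the message) would be:

```r
# file.path() joins components with single separators, avoiding the
# doubled slash that paste0("/.../WPSCASES/", "/word/*.docx") produces
pattern <- file.path("/Users/Gordon/Desktop/WPSCASES", "word", "*.docx")

# Sys.glob() expands the wildcard; an empty result means the pattern
# matched no files, which is one way list_files() can end up failing
files <- Sys.glob(pattern)
length(files)
```

If length(files) is zero, the problem is the path or pattern rather than readtext itself.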
like koRpus would be limited in usefulness if there were no way to
get the package to do its work on a large collection of files all at once.
I hope this problem will make sense to someone, and that there is a tenable
solution to it.
Thanks,
Gordon
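Since the thread is about running a package like koRpus over a whole folder, one common pattern is to list the files and apply the per-file function to each (a sketch; the file pattern and the lapply approach are assumptions, and the tokenize() arguments are taken from the message above):

```r
library(koRpus)

# list all matching files in the folder (pattern is an assumption;
# adjust to the actual file type)
files <- list.files("/Users/Gordon/Desktop/WPSCASESONE",
                    pattern = "\\.txt$", full.names = TRUE)

# tokenize each file in turn, keeping the results in a list keyed
# by file name
token.list <- lapply(files, function(f) {
  tokenize(f, lang = "en", doc_id = basename(f))
})
```

Note that koRpus also needs the matching language-support package (koRpus.lang.en) installed for lang = "en".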
There is an interesting item on stringsAsFactors in this useR! 2020 session:
https://www.youtube.com/watch?v=X_eDHNVceCU&feature=youtu.be
It's about 27 minutes in.
Chris Gordon-Smith
On 15/07/2020 17:16, Marc Schwartz via R-help wrote:
>> On Jul 15, 2020, at 4:31 AM, andy
Started having a problem installing packages where R can't find base Unix
commands. I've put an example below (other packages give identical "command not
found" errors, sometimes for different commands, e.g. sh) along with my
PATH/.Renviron (where the problem likely is). I'm on macOS. Thanks in advance
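A first diagnostic step for this kind of "command not found" failure is to compare the PATH that R sees with the one the shell uses (a sketch; the .Renviron line shown in the comment is an assumption to adapt):

```r
# show the PATH visible to R, one directory per element; if /usr/bin
# and /bin are missing, installs that shell out to sh, make, etc.
# will fail with "command not found"
strsplit(Sys.getenv("PATH"), ":")[[1]]

# note: a PATH= line in ~/.Renviron REPLACES the default PATH rather
# than appending to it, so it must include the system directories, e.g.
# PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
```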
Hello,
I have interval censored data with some late-entry subjects. Does any R package
for survival analysis with interval-censored data allow for left truncation,
and if so, how?
Thanks and Best Wishes,
Fabiana
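As a hedged starting point (this addresses the interval-censoring part only, not the left-truncation part, which has to be checked per package), the survival package encodes interval censoring via Surv(type = "interval2"):

```r
library(survival)

# toy interval-censored data: each event time is known only to lie
# in (left, right]; NA marks an open end (values are illustrative)
d <- data.frame(left  = c(1, 2, 4),
                right = c(3, NA, 6))

# type = "interval2" encodes interval censoring directly
s <- Surv(d$left, d$right, type = "interval2")

# a parametric fit accepts this response; whether a given fitter also
# allows late entry (left truncation) must be verified in its docs
fit <- survreg(s ~ 1, data = d, dist = "weibull")
```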
-----Original Message-----
From: Bert Gunter [mailto:bgunter.4...@gmail.com]
Sent: 27 April 2016 21:18
To: Jeff Newmiller
Cc: Gordon, Fabiana; r-help@R-project.org
Subject: Re: [R] Create a new variable and concatenation inside a "for" loop
...
"(R is case sensitive, so "C" has no such proble
e code would be something like this,
for i=1:5
A(:, 2*i-1:2*i)= sin(X(:, 2*i-1:2*i)) % the ":" symbol indicates all rows
end
Many Thanks,
Fabiana
Dr Fabiana Gordon
Senior Statistical Consultant
Statistical Advisory Service, School Of Public Health,
Imperial College London
1st F
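The MATLAB loop quoted above maps to R roughly as follows (a sketch; X here is just a stand-in matrix with at least 10 columns):

```r
# stand-in data: any matrix with >= 10 columns works
X <- matrix(runif(40), nrow = 4, ncol = 10)
A <- matrix(0, nrow = nrow(X), ncol = 10)

for (i in 1:5) {
  cols <- (2 * i - 1):(2 * i)    # columns 2i-1 and 2i
  A[, cols] <- sin(X[, cols])    # blank row index = all rows, like MATLAB's ":"
}

# since sin() is vectorized over matrices, the loop reduces to:
# A <- sin(X)
```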
I'm having a similar problem. Did you get a resolution?
Hi -
I'm using the tune.nnet() to identify the optimal tuning parameters for a
dataset in which the dependent variable is binary and has very few 1s. As
a result, tune.nnet() provides parameters that give high accuracy, but by
increasing the specificity and decreasing the sensitivity radically. D
Hello,
I am using vegan to do an NMDS plot and I would like to suppress the labels
for the loading vectors. Is this possible? Alternatively, how can I avoid
overlap?
Many thanks for the help.
Example code:
#perform NMDS using metaMDS() function
spe.nmds <- metaMDS(data, distance = 'bray', k = 2, eng
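For the label question, one approach is to draw an empty ordination and add only the layers you want (a sketch using vegan's standard plotting methods; varespec is just the example dataset shipped with vegan):

```r
library(vegan)
data(varespec)  # example community data shipped with vegan
nmds <- metaMDS(varespec, distance = "bray", k = 2)

# type = "n" draws an empty ordination frame with no labels at all
plot(nmds, type = "n")
points(nmds, display = "sites", pch = 19)

# ordipointlabel() adds labels with placement optimized to reduce overlap
ordipointlabel(nmds, display = "species", add = TRUE)
```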
Dear R users,
I'm a new user to R and have a data set consisting of a number of variables (in
a data frame). I wish to carry out a regression analysis of the first variable
against all the rest in turn. I have used the following code to do this
dd<-read.table("for loop.txt",header=T)
for (j in
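A complete version of that loop might look like the following (a sketch on toy data; the column names and use of reformulate() are assumptions):

```r
# toy data frame: first column is the response, the rest are predictors
dd <- data.frame(y = rnorm(20), x1 = rnorm(20), x2 = rnorm(20))

fits <- vector("list", ncol(dd) - 1)
for (j in 2:ncol(dd)) {
  # build the formula "y ~ xj" from the column names
  f <- reformulate(names(dd)[j], response = names(dd)[1])
  fits[[j - 1]] <- lm(f, data = dd)
}
names(fits) <- names(dd)[-1]

# slope of each single-predictor regression
sapply(fits, function(m) coef(m)[2])
```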
the yellow region, and the cluster that should be black -is- black.
Again, I apologize if I'm missing something simple. Thanks for your help in
understanding this behaviour.
Gordon
--
sessionInfo()
R version 2.13.1 (2011-07-08)
Platform: i386-apple-darwin9.8.0/i386 (32-bit)
locale:
Hi,
I posted this question on stats.stackexchange.com 3 days ago but the
answer didn't really address my question concerning speed in
competing risks regression. I hope you don't mind me asking it in this
forum:
I’m doing a registry based study with almost 200 000 observations and
I want to pe
Never mind, I found a generic solution:
require(reshape)
# melt to long format, keeping id, f1 and f2 as identifiers
melted <- melt(dataframe, id = c("id", "f1", "f2"))
# cast back to wide form, averaging over the collapsed factor f2
averaged <- cast(melted, id + f1 ~ variable, mean)
which collapses away "f2", and it's easy to generalize this to collapse
any factors.
Thanks anyway
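For reference, a base-R equivalent of the melt/cast approach (the toy values are assumptions; na.action = na.pass keeps rows with NA so that mean(..., na.rm = TRUE) can skip the missing value rather than dropping the row):

```r
# toy data mirroring the description: collapse f2 and average a and b,
# taking the non-missing value instead of dividing by 2
df <- data.frame(id = rep(1:2, each = 2), f1 = "x",
                 f2 = c("p", "q"),
                 a  = c(1, 3, 5, NA), b = 1:4)

out <- aggregate(cbind(a, b) ~ id + f1, data = df, FUN = mean,
                 na.rm = TRUE, na.action = na.pass)
```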
preserve id and f1, but want to collapse f2 and take the
corresponding mean values of a and b. Missing values in either column
should be handled properly (i.e., just take the non-missing number
without dividing by 2).
I had a look at rowSum/Means and s/l/ta
We would like to use the qrnn package for building a quantile linear ridge
regression.
To this end we need to use the function qrnn.rbf.
The meaning of the second argument, x.basis, isn't clear to me.
What should I give it as an argument? Do the contents of this matrix have
any meaning, or only it
"blue" shades appear in panel 1, the reds in panel
2 and a mixture only in panel 3. Can anyone help explain why this is not
happening?
I am using
> sessionInfo()
R version 2.11.1 (2010-05-31)
i386-pc-mingw32
locale:
[1] LC_COLLATE=English_United Kingdom.1252 LC_CTYPE=English_United
Kingd
in another dataframe.
Gordon
On Sat, Mar 13, 2010 at 7:54 PM, Junqian Gordon Xu wrote:
> I have a multilevel dataframe (df):
>
> ID Date Segment Slice Tract Lesion
> 1 CSPP005 12/4/2007 1 1 LCST 0
> 2 CSPP005 12/4/2007 1 1 LP
in rows 1-6), I want to
code the "Lesion" in the 7th row as 0.
ID Date Segment Slice Tract Lesion
7 CSPP005 12/4/2007 1 1 Whole 0
The whole data frame is just repeated units of the 7 different Tracts
for different ID->Date->Segment->Sl
] $Lesion <- 0
I started with (don't know if this is the right path),
Lesion2<-df[which(df$Lesion == 2),]
Where.Lesion2<-unique(Lesion2[,1:4])
Whole<-subset(df, Tract == "Whole")
But stuck at how to match the ID/Date/Segment/Slice fr
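Guessing from the fragments, one way to finish the matching step is to build a composite key from the four identifying columns (a sketch on toy data; the values and the recode direction are assumptions based on the fragments above):

```r
# toy data: one ID/Date/Segment/Slice combination whose tracts include
# a Lesion == 2 row (values are assumptions for illustration)
df <- data.frame(ID = "CSPP005", Date = "12/4/2007",
                 Segment = 1, Slice = 1,
                 Tract = c("LCST", "LP", "Whole"),
                 Lesion = c(2, 0, 1))

# one composite key per row from the four identifying columns
key <- paste(df$ID, df$Date, df$Segment, df$Slice)

# keys of combinations that contain a Lesion == 2 row
lesioned <- unique(key[df$Lesion == 2])

# recode the matching "Whole" rows
df$Lesion[df$Tract == "Whole" & key %in% lesioned] <- 0
```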
is.
On Sat, Jul 25, 2009 at 4:32 AM, Junqian Gordon Xu wrote:
Actually when I read the spreadsheet from the csv file, "S1-[abcd]" are the
header and "T1-[abcd]" are the strings in first column of the data frame.
Gordon
On 07/25/2009 03:13 AM, jim holtman wrote:
It is not entirely clear what the format of your data is. If you have
a data
"T1-A" "T1-A" "T1-A"
"T2-A" "T2-A" "T2-A" "T2-A"
"S1-b" "S2-b" ...
"T1-B" ...
"T1-B" ..
ctor and the number of extra error
bars equals the number of vectors with NA values present.
Is this a bug?
Thanks
Gordon
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R
of the for loop.
My test for loops only save the data from the last iteration of the loop.
Am I missing something really simple?
Any recommended references on loops and control structures in R?
Thanks,
Gord
--
Gordon J. Holtslander
gordon.holtslan...@usask.ca
Dept. of Biology
University of Saskat
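The usual fix for a loop that only keeps the last iteration is to preallocate a container and index it by the loop variable (a minimal sketch):

```r
# preallocate, then write each iteration into its own slot instead of
# overwriting a single object every pass
results <- numeric(5)
for (i in 1:5) {
  results[i] <- i^2
}
results  # 1 4 9 16 25

# for results of varying shape, preallocate a list instead:
# out <- vector("list", 5); out[[i]] <- ...
```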
step only. But your approach to the whole problem sidesteps
that. It's more elegant, as well as faster, than the way I thought of
the problem.
Gordon
Stavros Macrakis wrote:
> On Wed, Feb 25, 2009 at 9:25 AM, Fox, Gordon wrote:
>
>> The tricky part isn't finding the com
." Several of these suggestions
solve the problem nicely!
Gordon
P.S. The numbers involved will never be very large -- these are
dimensions of areas in which trees were sampled, in meters. They'll
always be on the order of 50-100m or so on a side.
--
Dr. Gordon A. Fox Voice: (81
nswer - the question is
how to get it from our list of common factors, other than by brute
force.
Thanks for any advice.
Gordon
--
Dr. Gordon A. Fox Voice: (813)974-7352 Fax: (813)974-3263
Dept. of Integrative Biology ((for US mail:)SCA 110) ((for FedEx
etc:)NES 107)
Univ. of
anyone
know of an implementation that does this? I searched both the internet
and the help archives and was unable to find anything. Thanks in
advance.
Bernard Gordon