Personally I liked two workshops Thomas Lin Pedersen gave:
https://www.youtube.com/watch?v=h29g21z0a68
https://www.youtube.com/watch?v=0m4yywqNPVY&t=5219s
-Roy
> On Nov 18, 2020, at 3:24 PM, John via R-help wrote:
On Tue, 17 Nov 2020 12:43:21 -0500
C W wrote:
> Dear R list,
>
> I am an old-school R user. I use apply(), with(), and which() in the base
> package instead of filter(), select(), and separate() in the Tidyverse. The
> idea of pipelining (i.e. %>%) my code was foreign to me for a while. It
> makes the code shorter, but sometimes less readable?
Hi Gayathri,
Maybe the cmscu package?
https://github.com/jasonkdavis/r-cmscu
Jim
On Thu, Nov 19, 2020 at 6:30 AM Gayathri Nagarajan <gayathri.nagara...@gmail.com> wrote:
> Hi Team
>
> I am a new learner trying to build n-gram models from a text corpus and
> trying to understand the modified Kneser-Ney smoothing algorithm to code
> and build my word prediction model.
On 17/11/2020 12:43 p.m., C W wrote:
> Dear R list,
> I am an old-school R user. I use apply(), with(), and which() in the base
> package instead of filter(), select(), and separate() in the Tidyverse. The
> idea of pipelining (i.e. %>%) my code was foreign to me for a while. It
> makes the code shorter, but sometimes less readable?
I'd recommend two places to get started:
* https://r4ds.had.co.nz/data-visualisation.html for a quick intro to
ggplot2 (and the rest of the book explains the general tidyverse
philosophy)
* https://ggplot2-book.org for the full details of ggplot2.
Hadley
On Wed, Nov 18, 2020 at 11:37 AM C W wrote:
I should have said: Have you worked through the Vignettes and examples??
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Wed, Nov 18, 2020 at 9:37 AM C W wrote:
Wrong list!
Google "kneser Ney smoothing algorithm" for possibilities.
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Wed, Nov 18, 2020 at 11:30 AM Gayathri Nagarajan wrote:
Hi Team
I am a new learner trying to build n-gram models from a text corpus and
trying to understand the modified Kneser-Ney smoothing algorithm to code and
build my word prediction model.
Can someone point me to a vignette or tutorial that will help me learn
this?
Thanks in advance for your help
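While waiting for a tutorial pointer, here is a rough, hedged sketch of interpolated Kneser-Ney bigram probabilities on a made-up toy corpus; the token vector, variable names, and the fixed discount d = 0.75 are all illustrative assumptions, not anything from the thread:

```r
tokens <- c("a", "rose", "is", "a", "rose", "is", "a", "rose")
w1 <- head(tokens, -1)   # histories
w2 <- tail(tokens, -1)   # continuations
bigrams <- paste(w1, w2)
big_n  <- table(bigrams) # bigram counts
hist_n <- table(w1)      # history counts
d <- 0.75                # absolute discount (a common default)
# continuation probability: distinct histories preceding w, over distinct
# bigram types -- this is what distinguishes Kneser-Ney from plain discounting
n_types <- length(unique(bigrams))
p_cont <- tapply(w1, w2, function(h) length(unique(h))) / n_types
pkn <- function(w, h) {
  b <- paste(h, w)
  c_b <- if (b %in% names(big_n)) big_n[[b]] else 0
  c_h <- hist_n[[h]]
  # interpolation weight: mass freed by discounting the continuations of h
  lambda <- d * length(unique(w2[w1 == h])) / c_h
  max(c_b - d, 0) / c_h + lambda * p_cont[[w]]
}
pkn("rose", "a")
```

The probabilities given a fixed history sum to one over the observed vocabulary, which is a quick sanity check on any smoothing code.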
RTFM, perhaps?
Or even worse, buy his book?
el
—
Sent from Dr Lisse’s iPad Mini 5
On 18 Nov 2020, 20:39 +0200, Ben Tupper wrote:
Hi,
I feel your pain. As you have likely discovered yourself, there are
just about 10^14 tutorials/posts/tips out there on ggplot2. See
https://rseek.org/?q=+ggplot2+tutorial for example. Yikes!
One resource I found most helpful when I started is
https://evamaerey.github.io/ggplot_flipbook/gg
This is not the place for tutorials (although I recognize that many
responses and discussions do intersect tutoriality).
If you do a web search on ggplot tutorials you will find many good ones. Or
go to the RStudio website which links to resources, including Hadley
Wickham's book, which is probably
Dear R list,
I am an old-school R user. I use apply(), with(), and which() in the base
package instead of filter(), select(), and separate() in the Tidyverse. The
idea of pipelining (i.e. %>%) my code was foreign to me for a while. It makes
the code shorter, but sometimes less readable?
With ggplot2, I just do
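For what it's worth, the base-versus-pipe question can be shown in a few lines; a minimal sketch on invented toy data (the data frame and threshold are mine, not from the thread), using the native |> pipe from R >= 4.1 as a stand-in for magrittr's %>%:

```r
df <- data.frame(id = 1:4, x = c(5, 3, 8, 1))
res_base  <- df[df$x > 2, ]         # old-school logical indexing
res_which <- df[which(df$x > 2), ]  # same rows via an explicit which()
res_pipe  <- df |> subset(x > 2)    # the pipe just moves df to the first argument
res_pipe
```

All three give the same rows; the pipe changes how the call reads, not what it computes.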
I will do that...
Thanks again Jeff.
r/
Gregg Powell
‐‐‐ Original Message ‐‐‐
On Wednesday, November 18, 2020 8:36 AM, Jeff Newmiller wrote:
> Instead, learn how to use the merge function, or perhaps the dplyr::left_join
> function. VLOOKUP is really not necessary.
Instead, learn how to use the merge function, or perhaps the dplyr::left_join
function. VLOOKUP is really not necessary.
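A minimal sketch of that advice with base merge(); the two lookup tables and their column names are invented for illustration:

```r
prices <- data.frame(item = c("apple", "pear"), price = c(1.20, 0.80))
orders <- data.frame(item = c("pear", "apple", "banana"), qty = c(2, 1, 3))
# all.x = TRUE keeps every row of orders (a left join), much like VLOOKUP
# returning #N/A for keys with no match -- unmatched rows get NA
res <- merge(orders, prices, by = "item", all.x = TRUE)
res
```

dplyr::left_join(orders, prices, by = "item") gives the same result without reordering the rows, which is one reason people reach for it.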
On November 18, 2020 7:11:49 AM PST, Gregg via R-help wrote:
Thanks Andrew and Mitch for your help.
With your assistance, I was able to sort this out.
Since I have to do this type of thing so often, and since there is no existing
package/function (yet) that makes this easy, if ever I get to the point where I
develop enough skill to build and submit a new
Maybe this could be interesting to verify against the anomalies found?
"A second memory card with uncounted votes was found during an audit in
Fayette County, Georgia, containing 2,755 votes"
https://www.zerohedge.com/political/second-memory-card-2755-votes-found-during-georgia-election-audit-decre
Thanks, everyone!
Quoting Jim Lemon:
Oops, I sent this to Tom earlier today and forgot to copy to the list:
VendorID=rep(paste0("V",1:10),each=5)
AcctID=paste0("A",sample(1:5,50,TRUE))
Data<-data.frame(VendorID,AcctID)
table(Data)
# get multiple vendors for each account
dupAcctID<-colSums(table(Data)>0)
Oops, I sent this to Tom earlier today and forgot to copy to the list:
VendorID=rep(paste0("V",1:10),each=5)
AcctID=paste0("A",sample(1:5,50,TRUE))
Data<-data.frame(VendorID,AcctID)
table(Data)
# get multiple vendors for each account
dupAcctID<-colSums(table(Data)>0)
Data$dupAcct<-NA
# fill in the vendor count for each account by indexing on account name
Data$dupAcct<-dupAcctID[Data$AcctID]
On Wed, Nov 18, 2020 at 5:40 AM Bert Gunter wrote:
>
> z <- with(Data2, tapply(Vendor, Account, I))
> n <- vapply(z, length, 1)
> data.frame(Vendor = unlist(z),
>            Account = rep(names(z), n),
>            NumVen = rep(n, n)
> )
>
> ## which gives:
>
>    Vendor Account NumVen
> A1     V1      A1      1
> A