attached):
> [1] compiler_3.4.1 tools_3.4.1
> Many thanks in advance
> Ross
ve to upgrade to use it.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
an average of ~= 13 without considering
FFB. If I assume FFB is positive, then I can easily see E(y) ~= 15 and
E(y) - 1.96 * 0.96 s.d. ~= 13.5. So ABW < 11 has zero or almost zero
probability mass.
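A quick sanity check (just a sketch: it assumes the fitted bn.fit object
is called fitted, and uses the ABW and FFB nodes mentioned above) is to
estimate that tail probability directly:

library(bnlearn)
# P(ABW < 11 | FFB > 0), estimated by logic sampling; should be ~0.
cpquery(fitted, event = (ABW < 11), evidence = (FFB > 0), n = 10^6)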
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
'y', TR = c(9,
max(data$TR)), BU = c(15819, max(data$BU)), RF = c(2989,
max(data$RF)), n=10^6, method = "lw")
3) look at the parameters in your fitted network and diagnose why this
is happening.
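For 3), a sketch of where to look (again assuming the fitted bn.fit
object is called fitted and the node of interest is y, as in the call
above):

fitted$y         # local distribution of y: intercept, coefficients, sd
coef(fitted$y)   # just the regression coefficients against its parents
fitted$y$sd      # residual standard deviation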
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
> that these classes should both typically
> return a value for ABW that is very much higher than the threshold value.
That may be, but much depends on the specific sample the model was
fitted from. What does the fitted network look like?
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
intervals in the evidence.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
aining?
TAN does not do any sort of feature selection, so all the nodes should
be there. As for "showing the network after training", what kind of
plot are you looking for?
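If a plot of the learned structure is all you need, something like this
works (a sketch, assuming the training data frame is called data and the
class variable is named "class"):

library(bnlearn)
tan = tree.bayes(data, training = "class")
graphviz.plot(tan)   # needs Rgraphviz; plot(tan) works without it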
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
produce a graph + barplots plot from
Rgraphviz. I have never managed to make it work, though.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
als first.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
- attr(*, "method")= chr "lw"
To compute P(A = a | whatever you conditioned on), just sum the
corresponding weights over the total weight mass. Since no language
trickery is involved, this works reliably.
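For example (a sketch with placeholder names: fitted is the bn.fit
object, A the node of interest and B = "b" the evidence):

sims = cpdist(fitted, nodes = "A", evidence = list(B = "b"),
              method = "lw", n = 10^6)
w = attr(sims, "weights")
# P(A == "a" | B == "b") as the weighted proportion of matching particles.
sum(w[sims$A == "a"]) / sum(w)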
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
parse(text = "(M=='s')")),
evidence=list(lag1.M1='s'),
method = "lw")
passing str2 as a list.
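That is, something along these lines (a sketch; str1 and the node names
are placeholders):

str1 = "(M == 's')"
cpquery(fitted, event = eval(parse(text = str1)),
        evidence = list(lag1.M1 = 's'), method = "lw")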
Cheers,
Marco
--
Marco Scutari, Ph.D.
Lecturer in Statistics, Department of Statistics
University of Oxford, United Kingdom
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
> likelihood weighting method will work.
> Don't
> hesitate to correct me if I'm wrong.
From your description, likelihood weighting should be fine.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
g
variables.
Cheers,
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
do so I suggest you should install the
latest bugfix snapshot from bnlearn.com to avoid a few other bugs in
cpquery(..., method = "lw").
Cheers,
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
to reassemble observed and predicted class labels and compute your
metrics.
> I also tried the *e1071* package, but I could not find a way to do
> cross-validation.
You might be able to trick the tune() function into doing it, but I am not sure.
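If you want to try that route, a rough, untested sketch (placeholder
formula and data; tune.control() has a cross argument for k-fold
cross-validation):

library(e1071)
tuned = tune(svm, class ~ ., data = training.set,
             ranges = list(cost = c(0.1, 1, 10)),
             tunecontrol = tune.control(sampling = "cross", cross = 10))
summary(tuned)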
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
> different network every time it is run on
> the same data set?
This is not surprising, because mmhc() does not have a "start"
argument, so it's starting from the same network over and over. There
is no way to provide a random seed to mmhc(), so the only way
one of the BN classifiers in bnlearn,
naive.bayes()/tree.bayes(), which handle the concept of a response
variable more naturally than general-purpose BNs.
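A minimal sketch of that workflow (placeholder data frames and class
variable):

library(bnlearn)
nb = naive.bayes(training.set, training = "class")
nb.fitted = bn.fit(nb, training.set)
pred = predict(nb.fitted, test.set)
table(pred, test.set$class)   # confusion matrix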
Hope it helps,
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
package from Korbinian Strimmer targets exactly that kind
of application:
http://cran.r-project.org/web/packages/GeneNet/
Regards,
Marco
--
Marco Scutari, Ph.D.
Research Associate, Genetics Institute (UGI)
University College London (UCL), United Kingdom
Marco, author and maintainer of bnlearn.
--
Marco Scutari, Ph.D. Student
Department of Statistical Sciences
University of Padova, Italy
ent you have to compute the (conditional) probabilities
by hand.
Best Regards,
Marco Scutari
author and maintainer of bnlearn
--
Marco Scutari
Ph.D. Student, Department of Statistical Sciences
University of Padova
"Facts don't care if you feel good about them." Slashdot,