Hello,
See ?scale
scale(DATA) # mean 0, unit variance
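In case a worked example helps, here is what `scale()` does with the first few values of the DATA vector posted below; by default it subtracts the mean and divides by the standard deviation (the variable names `x` and `z` are just for illustration):

```r
# scale() centers and rescales a vector: (x - mean(x)) / sd(x)
x <- c(0.000103113, 0.000102948, 0.000104001, 0.000103794, 0.000104628)
z <- scale(x)   # returns a one-column matrix with attributes for center/scale

mean(z)                 # ~0 (up to floating-point error)
sd(as.vector(z))        # ~1
# the same result computed by hand:
all.equal(as.vector(z), (x - mean(x)) / sd(x))
```

Note that `scale()` returns a matrix, so wrap it in `as.vector()` if you need a plain numeric vector back.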
Hope this helps,
Rui Barradas
On 06-04-2013 21:21, Beatriz González Domínguez wrote:
Dear all,
I'm finding it difficult to normalize this data. Could you provide some input?
DATA:
c(0.000103113, 0.000102948, 0.000104001, 0.000103794, 0.000104628,
9.2765e-05, 9.4296e-05, 9.5025e-05, 9.4978e-05, 9.8821e-05, 9.7586e-05,
9.6285e-05, 0.00010158, 0.000100919, 0.000103535, 0.00010332
1 3 3 0.6094344 0.7128677
12 3 5 0.6550063 0.7661741
A.K.
----- Original Message -----
From: york8866
To: r-help@r-project.org
Cc:
Sent: Tuesday, June 19, 2012 8:08 PM
Subject: [R] data normalization
I have a dataframe like the following:
ID TIME DV
1 0 0.88
Thanks for the help,
it works very well.
Assuming that the entries for each subject are ordered with respect to
TIME and each subject has a measurement at TIME = 0, you could use
the following:
Data <- read.table(textConnection("ID TIME DV
1 0 0.880146038
1 1 0.88669051
1 3 0.610784702
1 5 0.75604
2 0 0.456263368
2 1 0.369991537
2 3 0.508798346
2 5 0.441037014"), header = TRUE)
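The reply stops at reading the data in. Assuming, as stated above, that rows are ordered by TIME and each ID has a baseline row at TIME = 0, one way to finish the normalization is to divide each subject's DV by its first value using `ave()` (the column name `DV2` is my own choice, not from the original post):

```r
# Read the example data quoted in the thread
Data <- read.table(textConnection("ID TIME DV
1 0 0.880146038
1 1 0.88669051
1 3 0.610784702
1 5 0.75604
2 0 0.456263368
2 1 0.369991537
2 3 0.508798346
2 5 0.441037014"), header = TRUE)

# Normalize each subject's DV to its baseline (first row per ID):
# ave() applies the function within each ID group and returns a
# vector aligned with the original rows.
Data$DV2 <- with(Data, ave(DV, ID, FUN = function(x) x / x[1]))
Data  # DV2 equals 1 at TIME = 0 for every subject
```

If the rows were not guaranteed to be sorted by TIME, you would instead divide by `x[which.min(TIME)]` within each group, or sort the data frame first.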
I have a dataframe like the following:
ID TIME DV
1 0 0.880146038
1 1 0.88669051
1 3 0.610784702
1 5 0.75604
2 0 0.456263368
2 1 0.369991537
2 3 0.508798346
2 5 0.441037014
3 0