What I am looking for is a script like the following:

```
./regression -Z <AWS instance size: default i3.2xlarge, but please use i3.large whenever possible> deploy \
  -n <number of instances> \
  -o <operating system: default centos7> \
  -g <path to a GemFire build TAR file> \
  -u <your username, for correlation in hydradb: required> \
  -t <your team name, for display by regression tools: optional, no default> \
  -k <public ssh key generated by you to access instances once they are launched (id_rsa.pub)> \
  -F <path to a local functional test file for upload: optional> \
  <regression-name, used to uniquely identify your regression>

  Example:
    ./regression deploy -n 10 -o centos7 \
        -g ~/gemfire/closed/pivotalgf-assembly/build/distributions/pivotal-gemfire-regression-0.0.0.tgz \
        -k ~/.ssh/id_rsa.pub -u johndoe -t storageteam myregression
```

Operating systems to choose from are:
* centos7
* rhel7
* ubuntu14*
* ubuntu16*
* sles12*
* sles11*
* windows
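For context, the option handling described above could be sketched with `getopts`. This is a hypothetical reconstruction, not the actual script: the function name, variable names, and the OS validation step are my assumptions; only the flag letters and defaults come from the usage text.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "deploy" option parsing (the -Z global flag and
# the subcommand dispatch are omitted for brevity).
parse_deploy_opts() {
  os="centos7"        # -o default per the usage text above
  num="" build="" user="" team="" key="" functional=""
  OPTIND=1
  while getopts "n:o:g:u:t:k:F:" opt; do
    case "$opt" in
      n) num="$OPTARG" ;;
      o) os="$OPTARG" ;;
      g) build="$OPTARG" ;;
      u) user="$OPTARG" ;;
      t) team="$OPTARG" ;;
      k) key="$OPTARG" ;;
      F) functional="$OPTARG" ;;
      *) return 2 ;;
    esac
  done
  shift $((OPTIND - 1))
  # Trailing positional argument: the unique regression name.
  regression_name="${1:-}"
  # Validate against the OS list above (assumed behavior).
  case "$os" in
    centos7|rhel7|ubuntu14*|ubuntu16*|sles12*|sles11*|windows) ;;
    *) echo "unsupported OS: $os" >&2; return 2 ;;
  esac
}
```

With this shape, `parse_deploy_opts -n 10 -o rhel7 -u johndoe myregression` would leave `os=rhel7`, `num=10`, and `regression_name=myregression` set for the rest of the script to use.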

We used to have a script called "precheckin". I forget whether it lets us select the operating system the way the "regression" script above does.

On 6/25/20, 4:09 PM, "Xiaojian Zhou" <zho...@vmware.com> wrote:

    My vote is also with the current/existing process (not running for every PR).

    We can create an on-request precheckin run on a Windows machine, like what we did for running some regression tests, if someone really needs to run it on Windows. (Actually, I'd love to have this tool.)

    On 6/25/20, 1:52 PM, "Anilkumar Gingade" <aging...@vmware.com> wrote:

        Looking at the cost and the value derived, my vote is with the current/existing process (not running for every PR).

        On 6/25/20, 11:39 AM, "Mark Hanson" <mhan...@pivotal.io> wrote:

            I support adding it in, but I think the time wasted is less than you think. For me, the most important thing is finding an issue when it is put in.

            I think the current way is actually faster and more efficient, because every PR doesn’t have to wait the 4 hours, and in reality the number of Windows failures is lower than the number of Linux failures.

            Just a thought.

            Thanks,
            Mark


            > On Jun 25, 2020, at 11:30 AM, Jianxia Chen <jche...@apache.org> wrote:
            > 
            > +1 to add Windows tests to the PR pipeline. It may take a longer
            > time to run (up to 4 hours). But consider the time wasted on
            > reverting, fixing, and resubmitting if there is a failure after
            > merging to the develop branch. It is better to add the Windows
            > tests to the PR pipeline. We can reevaluate and optimize the
            > pipeline if the long running time is truly a concern.
            > 
            > On Thu, Jun 25, 2020 at 9:29 AM Kirk Lund <kl...@apache.org> wrote:
            > 
            >> I merged some new AcceptanceTests to develop after having my PR go GREEN.
            >> But now these tests are failing in Windows.
            >> 
            >> I'd like to propose that we add the Windows jobs to our PR checks
            >> if we plan to keep testing on Windows in CI.
            >> 
            >> Please vote or discuss.
            >> 
            >> Thanks,
            >> Kirk
            >> 