[ https://issues.apache.org/jira/browse/KYLIN-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18070387#comment-18070387 ]

ASF GitHub Bot commented on KYLIN-4889:
---------------------------------------

codecov-commenter commented on PR #1565:
URL: https://github.com/apache/kylin/pull/1565#issuecomment-4171605523

   ## [Codecov](https://app.codecov.io/gh/apache/kylin/pull/1565?dropdown=coverage&src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) Report
   :x: Patch coverage is `84.61538%` with `2 lines` in your changes missing coverage. Please review.
   :warning: Please [upload](https://docs.codecov.com/docs/codecov-uploader) a report for BASE (`kylin-on-parquet-v2@5650a4e`). [Learn more](https://docs.codecov.io/docs/error-reference?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache#section-missing-base-commit) about missing BASE reports.
   
   | [Files with missing lines](https://app.codecov.io/gh/apache/kylin/pull/1565?dropdown=coverage&src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | Patch % | Lines |
   |---|---|---|
   | [...in/scala/org/apache/spark/sql/SparderContext.scala](https://app.codecov.io/gh/apache/kylin/pull/1565?src=pr&el=tree&filepath=kylin-spark-project%2Fkylin-spark-query%2Fsrc%2Fmain%2Fscala%2Forg%2Fapache%2Fspark%2Fsql%2FSparderContext.scala&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache#diff-a3lsaW4tc3BhcmstcHJvamVjdC9reWxpbi1zcGFyay1xdWVyeS9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL3NwYXJrL3NxbC9TcGFyZGVyQ29udGV4dC5zY2FsYQ==) | 83.33% | [1 Missing and 1 partial :warning:](https://app.codecov.io/gh/apache/kylin/pull/1565?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) |
   
   <details><summary>Additional details and impacted files</summary>
   
   
   
   ```diff
   @@                  Coverage Diff                   @@
   ##             kylin-on-parquet-v2    #1565   +/-   ##
   ======================================================
     Coverage                       ?   24.25%           
     Complexity                     ?     4623           
   ======================================================
     Files                          ?     1144           
     Lines                          ?    65301           
     Branches                       ?     9590           
   ======================================================
     Hits                           ?    15841           
     Misses                         ?    47808           
     Partials                       ?     1652           
   ```
   </details>
   
   [:umbrella: View full report in Codecov by Sentry](https://app.codecov.io/gh/apache/kylin/pull/1565?dropdown=coverage&src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache).
   




> Query error when spark engine in local mode
> -------------------------------------------
>
>                 Key: KYLIN-4889
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4889
>             Project: Kylin
>          Issue Type: Bug
>    Affects Versions: v4.0.0-alpha
>            Reporter: Feng Zhu
>            Assignee: Feng Zhu
>            Priority: Major
>             Fix For: v4.0.0
>
>
> When I query with the Spark engine in local mode (with -Dspark.local=true), the
> Spark application is still submitted to YARN, and the following error
> occurs:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, sandbox.hortonworks.com, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
>     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
>     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2291)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
>     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:88)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
>     at org.apache.spark.scheduler.Task.run(Task.scala:123)
>     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Driver stacktrace:
> while executing SQL:
> "select * from (
>     select KYLIN_SALES.PART_DT, sum(KYLIN_SALES.PRICE) from KYLIN_SALES group by KYLIN_SALES.PART_DT
>     union select KYLIN_SALES.PART_DT, max(KYLIN_SALES.PRICE) from KYLIN_SALES group by KYLIN_SALES.PART_DT
>     union select KYLIN_SALES.PART_DT, count(*) from KYLIN_SALES group by KYLIN_SALES.PART_DT
>     union select KYLIN_SALES.PART_DT, count(distinct KYLIN_SALES.PRICE) from KYLIN_SALES group by KYLIN_SALES.PART_DT
> ) limit 501"
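
The symptom above is that the -Dspark.local=true flag is ignored when SparderContext builds its Spark session, so the job goes to YARN anyway. A minimal sketch of the kind of master-selection logic such a fix needs, assuming a hypothetical `chooseMaster` helper (the names below are illustrative, not the actual code in SparderContext.scala):

```scala
// Sketch only: force a local master when the spark.local system property
// is set, instead of falling through to the configured cluster master.
object LocalModeSketch {
  def chooseMaster(configuredMaster: String): String = {
    val localMode =
      java.lang.Boolean.parseBoolean(System.getProperty("spark.local", "false"))
    if (localMode) "local[*]" else configuredMaster
  }
}
```

With this shape, `chooseMaster("yarn")` returns `"local[*]"` when the JVM was started with -Dspark.local=true, and `"yarn"` otherwise, so the SparkSession builder never submits to the cluster in local mode.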



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
